Test Report: Docker_Linux_crio 21409

432f5d8b8de395ddce63f21c968df47ae82ccbe6:2025-10-18:41964

Failed tests (41/326)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.25
35 TestAddons/parallel/Registry 14.71
36 TestAddons/parallel/RegistryCreds 0.44
37 TestAddons/parallel/Ingress 146.88
38 TestAddons/parallel/InspektorGadget 5.24
39 TestAddons/parallel/MetricsServer 5.31
41 TestAddons/parallel/CSI 400.42
42 TestAddons/parallel/Headlamp 2.53
43 TestAddons/parallel/CloudSpanner 5.25
44 TestAddons/parallel/LocalPath 12.09
45 TestAddons/parallel/NvidiaDevicePlugin 5.25
46 TestAddons/parallel/Yakd 6.24
47 TestAddons/parallel/AmdGpuDevicePlugin 5.28
91 TestFunctional/parallel/DashboardCmd 302.2
98 TestFunctional/parallel/ServiceCmdConnect 602.86
100 TestFunctional/parallel/PersistentVolumeClaim 369.04
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 240.75
128 TestFunctional/parallel/ServiceCmd/DeployApp 600.57
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 73.57
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.85
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.54
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.29
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.18
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.33
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
153 TestFunctional/parallel/ServiceCmd/Format 0.52
154 TestFunctional/parallel/ServiceCmd/URL 0.53
190 TestJSONOutput/pause/Command 2.38
196 TestJSONOutput/unpause/Command 2.03
288 TestPause/serial/Pause 5.81
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.13
302 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.12
311 TestStartStop/group/old-k8s-version/serial/Pause 6.27
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.76
323 TestStartStop/group/no-preload/serial/Pause 5.97
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.39
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.68
339 TestStartStop/group/newest-cni/serial/Pause 6.26
349 TestStartStop/group/embed-certs/serial/Pause 5.98
356 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.83
TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-493618 addons disable volcano --alsologtostderr -v=1: exit status 11 (245.812479ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 14:17:57.845275  103270 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:17:57.845404  103270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:17:57.845416  103270 out.go:374] Setting ErrFile to fd 2...
	I1018 14:17:57.845420  103270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:17:57.845611  103270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:17:57.845898  103270 mustload.go:65] Loading cluster: addons-493618
	I1018 14:17:57.846296  103270 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:17:57.846315  103270 addons.go:606] checking whether the cluster is paused
	I1018 14:17:57.846406  103270 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:17:57.846420  103270 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:17:57.846779  103270 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:17:57.865198  103270 ssh_runner.go:195] Run: systemctl --version
	I1018 14:17:57.865251  103270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:17:57.883743  103270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:17:57.978701  103270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:17:57.978783  103270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:17:58.011487  103270 cri.go:89] found id: "fcb7161ee1d1b988b7ab1002f35293f0b7091c9bac0d7bde7e8471426df926cf"
	I1018 14:17:58.011508  103270 cri.go:89] found id: "8f357a51c6b5dca76ccd31166d38783da025c5cbb4407847372de403612f8902"
	I1018 14:17:58.011512  103270 cri.go:89] found id: "530e145d6c2e072e1f708ba6909c358faf2489d1fb8aa08aa89f54faf841dbbf"
	I1018 14:17:58.011515  103270 cri.go:89] found id: "84cd4c11831db761e4325adee4879bf6a046512882f96bb1ca62c4882e6c8939"
	I1018 14:17:58.011518  103270 cri.go:89] found id: "10ae25ecd1d9000b05b1943468a2e37d7d7ad234a9cbda7cef9a4926bc9a192c"
	I1018 14:17:58.011521  103270 cri.go:89] found id: "50a19f5b596d4f200db4d0ab0cd9f04fbb94a5ef09f512aeefc81c7d4920b424"
	I1018 14:17:58.011524  103270 cri.go:89] found id: "859d5d72eef12d0f592a58b42cf0be1853b423f2c91a731d854b74ff79470e8f"
	I1018 14:17:58.011526  103270 cri.go:89] found id: "78aea4ac76ed251743d7541eaa9c63acd9c8c2af1858f2862ef1bb654b9423b3"
	I1018 14:17:58.011529  103270 cri.go:89] found id: "775733aea8bf0d456fffa861d7bbf1e97ecc24f79c468dcc0c8f760bfe0b6df0"
	I1018 14:17:58.011535  103270 cri.go:89] found id: "6673efa0776562914d4e7c35e4a4a121d60c7796edfc736b01120598fdf6dfda"
	I1018 14:17:58.011537  103270 cri.go:89] found id: "89679d50a3910712c81dd83e3ab5d310452f13a23c0dce3d8afeb4abec58b99f"
	I1018 14:17:58.011544  103270 cri.go:89] found id: "c52d44cde4f7100b3b7100db0eda92a6716001140bcb111fc1b8bad7bcffcd87"
	I1018 14:17:58.011548  103270 cri.go:89] found id: "92ceaca691f5147122ab362b3936a7201bede1ae2ac6288d04e5d0641f150e2f"
	I1018 14:17:58.011552  103270 cri.go:89] found id: "da0ddb2d0550b3553349e63e5796b3d59ac8c5a7d574c962118808b4750f4934"
	I1018 14:17:58.011555  103270 cri.go:89] found id: "a51f3eea295020493bcd77872d74756d33249e7b25591c8967346f770e49a885"
	I1018 14:17:58.011569  103270 cri.go:89] found id: "ca1869e801d6ed194d599a52234484412f6d9de897b6eadae30ca18325c92762"
	I1018 14:17:58.011573  103270 cri.go:89] found id: "7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca"
	I1018 14:17:58.011579  103270 cri.go:89] found id: "d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e"
	I1018 14:17:58.011583  103270 cri.go:89] found id: "778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750"
	I1018 14:17:58.011587  103270 cri.go:89] found id: "fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa"
	I1018 14:17:58.011594  103270 cri.go:89] found id: "f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4"
	I1018 14:17:58.011602  103270 cri.go:89] found id: "411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5"
	I1018 14:17:58.011610  103270 cri.go:89] found id: "857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1"
	I1018 14:17:58.011615  103270 cri.go:89] found id: "aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8"
	I1018 14:17:58.011618  103270 cri.go:89] found id: ""
	I1018 14:17:58.011658  103270 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 14:17:58.025896  103270 out.go:203] 
	W1018 14:17:58.027334  103270 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:17:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:17:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 14:17:58.027354  103270 out.go:285] * 
	* 
	W1018 14:17:58.032703  103270 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 14:17:58.033924  103270 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-493618 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)
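Root-cause note: every MK_ADDON_DISABLE_PAUSED exit in this report follows the same two-step path visible in the log above: crictl successfully lists the kube-system containers, then `sudo runc list -f json` aborts with `open /run/runc: no such file or directory`, so the paused check (and with it the addon disable) fails. A minimal sketch of those two commands in Go, using only os/exec rather than minikube's internal cri helpers (illustrative only, not minikube's actual code):

	// pausedcheck.go - a sketch of the failing paused check, assuming a
	// host where crictl and runc are on PATH (as inside the kicbase node).
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Step 1 (succeeds in the log): enumerate kube-system container IDs.
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("crictl returned %d bytes of container IDs\n", len(ids))

		// Step 2 (fails in the log): ask runc for container states to see
		// which are paused. On this crio image the default runc state dir
		// /run/runc does not exist, so the command exits 1 and the
		// addon disable is aborted with MK_ADDON_DISABLE_PAUSED.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("runc list failed (%v): %s", err, out)
			return
		}
		fmt.Println(string(out))
	}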

TestAddons/parallel/Registry (14.71s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.40695ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-pdjc2" [d5f69ebc-b615-4334-8fe1-593c6fe7f496] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002722651s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-dddz6" [55ecd78e-f023-433a-80aa-4a98aed74734] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00383065s
addons_test.go:392: (dbg) Run:  kubectl --context addons-493618 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-493618 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-493618 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.263952713s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 ip
2025/10/18 14:18:21 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-493618 addons disable registry --alsologtostderr -v=1: exit status 11 (241.424555ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 14:18:21.364567  105983 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:18:21.364810  105983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:21.364819  105983 out.go:374] Setting ErrFile to fd 2...
	I1018 14:18:21.364822  105983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:21.365037  105983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:18:21.365327  105983 mustload.go:65] Loading cluster: addons-493618
	I1018 14:18:21.365696  105983 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:21.365711  105983 addons.go:606] checking whether the cluster is paused
	I1018 14:18:21.365788  105983 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:21.365801  105983 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:18:21.366175  105983 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:18:21.383324  105983 ssh_runner.go:195] Run: systemctl --version
	I1018 14:18:21.383404  105983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:18:21.401076  105983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:18:21.497522  105983 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:18:21.497615  105983 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:18:21.530351  105983 cri.go:89] found id: "fcb7161ee1d1b988b7ab1002f35293f0b7091c9bac0d7bde7e8471426df926cf"
	I1018 14:18:21.530377  105983 cri.go:89] found id: "8f357a51c6b5dca76ccd31166d38783da025c5cbb4407847372de403612f8902"
	I1018 14:18:21.530383  105983 cri.go:89] found id: "530e145d6c2e072e1f708ba6909c358faf2489d1fb8aa08aa89f54faf841dbbf"
	I1018 14:18:21.530387  105983 cri.go:89] found id: "84cd4c11831db761e4325adee4879bf6a046512882f96bb1ca62c4882e6c8939"
	I1018 14:18:21.530390  105983 cri.go:89] found id: "10ae25ecd1d9000b05b1943468a2e37d7d7ad234a9cbda7cef9a4926bc9a192c"
	I1018 14:18:21.530396  105983 cri.go:89] found id: "50a19f5b596d4f200db4d0ab0cd9f04fbb94a5ef09f512aeefc81c7d4920b424"
	I1018 14:18:21.530399  105983 cri.go:89] found id: "859d5d72eef12d0f592a58b42cf0be1853b423f2c91a731d854b74ff79470e8f"
	I1018 14:18:21.530403  105983 cri.go:89] found id: "78aea4ac76ed251743d7541eaa9c63acd9c8c2af1858f2862ef1bb654b9423b3"
	I1018 14:18:21.530407  105983 cri.go:89] found id: "775733aea8bf0d456fffa861d7bbf1e97ecc24f79c468dcc0c8f760bfe0b6df0"
	I1018 14:18:21.530420  105983 cri.go:89] found id: "6673efa0776562914d4e7c35e4a4a121d60c7796edfc736b01120598fdf6dfda"
	I1018 14:18:21.530425  105983 cri.go:89] found id: "89679d50a3910712c81dd83e3ab5d310452f13a23c0dce3d8afeb4abec58b99f"
	I1018 14:18:21.530428  105983 cri.go:89] found id: "c52d44cde4f7100b3b7100db0eda92a6716001140bcb111fc1b8bad7bcffcd87"
	I1018 14:18:21.530433  105983 cri.go:89] found id: "92ceaca691f5147122ab362b3936a7201bede1ae2ac6288d04e5d0641f150e2f"
	I1018 14:18:21.530437  105983 cri.go:89] found id: "da0ddb2d0550b3553349e63e5796b3d59ac8c5a7d574c962118808b4750f4934"
	I1018 14:18:21.530445  105983 cri.go:89] found id: "a51f3eea295020493bcd77872d74756d33249e7b25591c8967346f770e49a885"
	I1018 14:18:21.530455  105983 cri.go:89] found id: "ca1869e801d6ed194d599a52234484412f6d9de897b6eadae30ca18325c92762"
	I1018 14:18:21.530459  105983 cri.go:89] found id: "7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca"
	I1018 14:18:21.530466  105983 cri.go:89] found id: "d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e"
	I1018 14:18:21.530470  105983 cri.go:89] found id: "778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750"
	I1018 14:18:21.530473  105983 cri.go:89] found id: "fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa"
	I1018 14:18:21.530477  105983 cri.go:89] found id: "f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4"
	I1018 14:18:21.530481  105983 cri.go:89] found id: "411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5"
	I1018 14:18:21.530485  105983 cri.go:89] found id: "857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1"
	I1018 14:18:21.530489  105983 cri.go:89] found id: "aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8"
	I1018 14:18:21.530493  105983 cri.go:89] found id: ""
	I1018 14:18:21.530539  105983 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 14:18:21.546654  105983 out.go:203] 
	W1018 14:18:21.547940  105983 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 14:18:21.547958  105983 out.go:285] * 
	* 
	W1018 14:18:21.553556  105983 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 14:18:21.555196  105983 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-493618 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.71s)
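Note that the registry itself is healthy here: both pod waits and the in-cluster wget pass, and only the trailing `addons disable registry` fails, with the same `runc list` error as TestAddons/serial/Volcano above.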

TestAddons/parallel/RegistryCreds (0.44s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.814019ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-493618
addons_test.go:332: (dbg) Run:  kubectl --context addons-493618 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-493618 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (252.476097ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 14:18:21.800852  106095 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:18:21.801132  106095 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:21.801141  106095 out.go:374] Setting ErrFile to fd 2...
	I1018 14:18:21.801145  106095 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:21.801363  106095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:18:21.801645  106095 mustload.go:65] Loading cluster: addons-493618
	I1018 14:18:21.801999  106095 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:21.802014  106095 addons.go:606] checking whether the cluster is paused
	I1018 14:18:21.802099  106095 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:21.802112  106095 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:18:21.802476  106095 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:18:21.819787  106095 ssh_runner.go:195] Run: systemctl --version
	I1018 14:18:21.819866  106095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:18:21.838185  106095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:18:21.935206  106095 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:18:21.935289  106095 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:18:21.969141  106095 cri.go:89] found id: "fcb7161ee1d1b988b7ab1002f35293f0b7091c9bac0d7bde7e8471426df926cf"
	I1018 14:18:21.969171  106095 cri.go:89] found id: "8f357a51c6b5dca76ccd31166d38783da025c5cbb4407847372de403612f8902"
	I1018 14:18:21.969175  106095 cri.go:89] found id: "530e145d6c2e072e1f708ba6909c358faf2489d1fb8aa08aa89f54faf841dbbf"
	I1018 14:18:21.969178  106095 cri.go:89] found id: "84cd4c11831db761e4325adee4879bf6a046512882f96bb1ca62c4882e6c8939"
	I1018 14:18:21.969181  106095 cri.go:89] found id: "10ae25ecd1d9000b05b1943468a2e37d7d7ad234a9cbda7cef9a4926bc9a192c"
	I1018 14:18:21.969184  106095 cri.go:89] found id: "50a19f5b596d4f200db4d0ab0cd9f04fbb94a5ef09f512aeefc81c7d4920b424"
	I1018 14:18:21.969187  106095 cri.go:89] found id: "859d5d72eef12d0f592a58b42cf0be1853b423f2c91a731d854b74ff79470e8f"
	I1018 14:18:21.969189  106095 cri.go:89] found id: "78aea4ac76ed251743d7541eaa9c63acd9c8c2af1858f2862ef1bb654b9423b3"
	I1018 14:18:21.969192  106095 cri.go:89] found id: "775733aea8bf0d456fffa861d7bbf1e97ecc24f79c468dcc0c8f760bfe0b6df0"
	I1018 14:18:21.969198  106095 cri.go:89] found id: "6673efa0776562914d4e7c35e4a4a121d60c7796edfc736b01120598fdf6dfda"
	I1018 14:18:21.969203  106095 cri.go:89] found id: "89679d50a3910712c81dd83e3ab5d310452f13a23c0dce3d8afeb4abec58b99f"
	I1018 14:18:21.969207  106095 cri.go:89] found id: "c52d44cde4f7100b3b7100db0eda92a6716001140bcb111fc1b8bad7bcffcd87"
	I1018 14:18:21.969211  106095 cri.go:89] found id: "92ceaca691f5147122ab362b3936a7201bede1ae2ac6288d04e5d0641f150e2f"
	I1018 14:18:21.969214  106095 cri.go:89] found id: "da0ddb2d0550b3553349e63e5796b3d59ac8c5a7d574c962118808b4750f4934"
	I1018 14:18:21.969218  106095 cri.go:89] found id: "a51f3eea295020493bcd77872d74756d33249e7b25591c8967346f770e49a885"
	I1018 14:18:21.969232  106095 cri.go:89] found id: "ca1869e801d6ed194d599a52234484412f6d9de897b6eadae30ca18325c92762"
	I1018 14:18:21.969240  106095 cri.go:89] found id: "7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca"
	I1018 14:18:21.969250  106095 cri.go:89] found id: "d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e"
	I1018 14:18:21.969253  106095 cri.go:89] found id: "778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750"
	I1018 14:18:21.969255  106095 cri.go:89] found id: "fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa"
	I1018 14:18:21.969257  106095 cri.go:89] found id: "f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4"
	I1018 14:18:21.969260  106095 cri.go:89] found id: "411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5"
	I1018 14:18:21.969262  106095 cri.go:89] found id: "857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1"
	I1018 14:18:21.969264  106095 cri.go:89] found id: "aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8"
	I1018 14:18:21.969266  106095 cri.go:89] found id: ""
	I1018 14:18:21.969313  106095 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 14:18:21.985180  106095 out.go:203] 
	W1018 14:18:21.986582  106095 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 14:18:21.986607  106095 out.go:285] * 
	* 
	W1018 14:18:21.991420  106095 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 14:18:21.992814  106095 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-493618 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.44s)
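As with the two failures above, the configure and secret-inspection steps pass; only the exit-11 paused check on `addons disable registry-creds` fails.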

TestAddons/parallel/Ingress (146.88s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-493618 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-493618 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-493618 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [c74843f4-9057-41e2-930a-0ead88ca57b7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [c74843f4-9057-41e2-930a-0ead88ca57b7] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.00369565s
I1018 14:18:20.598510   93187 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-493618 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.302759865s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
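This failure mode is different: ssh propagates the remote command's exit status, and 28 is curl's CURLE_OPERATION_TIMEDOUT code, so the in-node request to the ingress controller never completed in time. A rough Go equivalent of that probe, run inside the node or with 127.0.0.1 replaced by the node IP 192.168.49.2 (a sketch, not the test's actual helper; the 30-second timeout is an assumption):

	// ingressprobe.go - sketch of the host-header probe that timed out above.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 30 * time.Second} // assumed timeout

		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			panic(err)
		}
		// The ingress rule matches on the Host header, not the URL: the
		// request targets loopback while claiming to be nginx.example.com.
		req.Host = "nginx.example.com"

		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("probe failed (mirrors the curl exit-28 timeout):", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("ingress responded:", resp.Status)
	}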
addons_test.go:288: (dbg) Run:  kubectl --context addons-493618 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-493618
helpers_test.go:243: (dbg) docker inspect addons-493618:

-- stdout --
	[
	    {
	        "Id": "7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748",
	        "Created": "2025-10-18T14:15:35.142040375Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 95181,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T14:15:35.183965001Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748/hosts",
	        "LogPath": "/var/lib/docker/containers/7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748/7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748-json.log",
	        "Name": "/addons-493618",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-493618:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-493618",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748",
	                "LowerDir": "/var/lib/docker/overlay2/6c53b42c7ae30be0a92aeee9616153aa98a76b8686b1ba574fed988eef723540-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c53b42c7ae30be0a92aeee9616153aa98a76b8686b1ba574fed988eef723540/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c53b42c7ae30be0a92aeee9616153aa98a76b8686b1ba574fed988eef723540/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c53b42c7ae30be0a92aeee9616153aa98a76b8686b1ba574fed988eef723540/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-493618",
	                "Source": "/var/lib/docker/volumes/addons-493618/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-493618",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-493618",
	                "name.minikube.sigs.k8s.io": "addons-493618",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a631e0cd76d05941fb0936045345b47fc87f5c3a110522f5c55a7218ec039637",
	            "SandboxKey": "/var/run/docker/netns/a631e0cd76d0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-493618": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:eb:b6:c3:02:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d904be0aa70c1af2cea11004150f1e24caa7082b6124c61db9de726e07acfb2f",
	                    "EndpointID": "8a31c67497c108fe079824c35877145f7cc3de3038048bb81926ece73d316513",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-493618",
	                        "7b0baa1647a9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-493618 -n addons-493618
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-493618 logs -n 25: (1.180460323s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p download-docker-735106 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-735106 │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │                     │
	│ delete  │ -p download-docker-735106                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-735106 │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │ 18 Oct 25 14:15 UTC │
	│ start   │ --download-only -p binary-mirror-035412 --alsologtostderr --binary-mirror http://127.0.0.1:38181 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-035412   │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │                     │
	│ delete  │ -p binary-mirror-035412                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-035412   │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │ 18 Oct 25 14:15 UTC │
	│ addons  │ disable dashboard -p addons-493618                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │                     │
	│ addons  │ enable dashboard -p addons-493618                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │                     │
	│ start   │ -p addons-493618 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │ 18 Oct 25 14:17 UTC │
	│ addons  │ addons-493618 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:17 UTC │                     │
	│ addons  │ addons-493618 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ enable headlamp -p addons-493618 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-493618 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-493618 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-493618 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-493618 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-493618 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-493618 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ ssh     │ addons-493618 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ ip      │ addons-493618 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │ 18 Oct 25 14:18 UTC │
	│ addons  │ addons-493618 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-493618                                                                                                                                                                                                                                                                                                                                                                                           │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │ 18 Oct 25 14:18 UTC │
	│ addons  │ addons-493618 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-493618 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ ssh     │ addons-493618 ssh cat /opt/local-path-provisioner/pvc-a6ac2dbf-6d84-47b0-9a9a-79b9ddfd5256_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │ 18 Oct 25 14:18 UTC │
	│ addons  │ addons-493618 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ ip      │ addons-493618 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:20 UTC │ 18 Oct 25 14:20 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 14:15:10
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
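Every entry below follows the glog layout declared on the line above. As a minimal hand-written illustration (not part of the test run; the field labels are our own), one such line can be split into its declared parts with awk:

    echo 'I1018 14:15:10.844195   94518 out.go:360] Setting OutFile to fd 1 ...' |
      awk '{
        sev  = substr($1, 1, 1);         # severity: I/W/E/F
        date = substr($1, 2);            # mmdd
        tod  = $2;                       # hh:mm:ss.uuuuuu
        pid  = $3;                       # threadid
        loc  = $4; sub(/\]$/, "", loc);  # file:line
        $1 = $2 = $3 = $4 = ""; sub(/^ +/, "");
        printf "sev=%s date=%s time=%s pid=%s loc=%s msg=%s\n", sev, date, tod, pid, loc, $0
      }'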
	I1018 14:15:10.844195   94518 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:15:10.844315   94518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:15:10.844327   94518 out.go:374] Setting ErrFile to fd 2...
	I1018 14:15:10.844333   94518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:15:10.844524   94518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:15:10.845093   94518 out.go:368] Setting JSON to false
	I1018 14:15:10.845947   94518 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7062,"bootTime":1760789849,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:15:10.846045   94518 start.go:141] virtualization: kvm guest
	I1018 14:15:10.847714   94518 out.go:179] * [addons-493618] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 14:15:10.849170   94518 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 14:15:10.849206   94518 notify.go:220] Checking for updates...
	I1018 14:15:10.851802   94518 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:15:10.852939   94518 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 14:15:10.854257   94518 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 14:15:10.855457   94518 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 14:15:10.856592   94518 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 14:15:10.857794   94518 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:15:10.881142   94518 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 14:15:10.881259   94518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:15:10.937968   94518 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-18 14:15:10.928477658 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:15:10.938071   94518 docker.go:318] overlay module found
	I1018 14:15:10.939805   94518 out.go:179] * Using the docker driver based on user configuration
	I1018 14:15:10.941011   94518 start.go:305] selected driver: docker
	I1018 14:15:10.941024   94518 start.go:925] validating driver "docker" against <nil>
	I1018 14:15:10.941035   94518 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 14:15:10.941568   94518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:15:10.999497   94518 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-18 14:15:10.990143183 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:15:10.999700   94518 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 14:15:10.999943   94518 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 14:15:11.001690   94518 out.go:179] * Using Docker driver with root privileges
	I1018 14:15:11.002970   94518 cni.go:84] Creating CNI manager for ""
	I1018 14:15:11.003053   94518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 14:15:11.003064   94518 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 14:15:11.003145   94518 start.go:349] cluster config:
	{Name:addons-493618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-493618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:15:11.004498   94518 out.go:179] * Starting "addons-493618" primary control-plane node in "addons-493618" cluster
	I1018 14:15:11.005651   94518 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 14:15:11.006976   94518 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 14:15:11.008175   94518 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:15:11.008218   94518 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 14:15:11.008231   94518 cache.go:58] Caching tarball of preloaded images
	I1018 14:15:11.008228   94518 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 14:15:11.008318   94518 preload.go:233] Found /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 14:15:11.008329   94518 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 14:15:11.008714   94518 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/config.json ...
	I1018 14:15:11.008737   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/config.json: {Name:mkdee9574b0b95000e535daf1bcb85983e767ae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:11.024821   94518 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 14:15:11.024970   94518 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 14:15:11.024989   94518 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 14:15:11.024994   94518 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 14:15:11.025001   94518 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 14:15:11.025006   94518 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 14:15:23.525530   94518 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 14:15:23.525596   94518 cache.go:232] Successfully downloaded all kic artifacts
	I1018 14:15:23.525645   94518 start.go:360] acquireMachinesLock for addons-493618: {Name:mkcf1dcaefe933480e3898dd01dccab4476df687 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 14:15:23.525773   94518 start.go:364] duration metric: took 97.675µs to acquireMachinesLock for "addons-493618"
	I1018 14:15:23.525804   94518 start.go:93] Provisioning new machine with config: &{Name:addons-493618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-493618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 14:15:23.525942   94518 start.go:125] createHost starting for "" (driver="docker")
	I1018 14:15:23.527896   94518 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 14:15:23.528207   94518 start.go:159] libmachine.API.Create for "addons-493618" (driver="docker")
	I1018 14:15:23.528245   94518 client.go:168] LocalClient.Create starting
	I1018 14:15:23.528363   94518 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem
	I1018 14:15:23.977885   94518 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem
	I1018 14:15:24.038227   94518 cli_runner.go:164] Run: docker network inspect addons-493618 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 14:15:24.054247   94518 cli_runner.go:211] docker network inspect addons-493618 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 14:15:24.054314   94518 network_create.go:284] running [docker network inspect addons-493618] to gather additional debugging logs...
	I1018 14:15:24.054332   94518 cli_runner.go:164] Run: docker network inspect addons-493618
	W1018 14:15:24.070008   94518 cli_runner.go:211] docker network inspect addons-493618 returned with exit code 1
	I1018 14:15:24.070042   94518 network_create.go:287] error running [docker network inspect addons-493618]: docker network inspect addons-493618: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-493618 not found
	I1018 14:15:24.070073   94518 network_create.go:289] output of [docker network inspect addons-493618]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-493618 not found
	
	** /stderr **
	I1018 14:15:24.070206   94518 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 14:15:24.087173   94518 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e55a00}
	I1018 14:15:24.087222   94518 network_create.go:124] attempt to create docker network addons-493618 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 14:15:24.087280   94518 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-493618 addons-493618
	I1018 14:15:24.145261   94518 network_create.go:108] docker network addons-493618 192.168.49.0/24 created
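	For reference, the check-then-create sequence above can be reproduced by hand. This is a rough sketch using the flags from the log; the network name is simply the profile name of this particular run:
	    docker network inspect addons-493618 >/dev/null 2>&1 ||
	      docker network create --driver=bridge \
	        --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	        -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	        --label=created_by.minikube.sigs.k8s.io=true \
	        addons-493618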
	I1018 14:15:24.145291   94518 kic.go:121] calculated static IP "192.168.49.2" for the "addons-493618" container
	I1018 14:15:24.145378   94518 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 14:15:24.161100   94518 cli_runner.go:164] Run: docker volume create addons-493618 --label name.minikube.sigs.k8s.io=addons-493618 --label created_by.minikube.sigs.k8s.io=true
	I1018 14:15:24.178649   94518 oci.go:103] Successfully created a docker volume addons-493618
	I1018 14:15:24.178727   94518 cli_runner.go:164] Run: docker run --rm --name addons-493618-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-493618 --entrypoint /usr/bin/test -v addons-493618:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 14:15:30.677122   94518 cli_runner.go:217] Completed: docker run --rm --name addons-493618-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-493618 --entrypoint /usr/bin/test -v addons-493618:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (6.49835529s)
	I1018 14:15:30.677159   94518 oci.go:107] Successfully prepared a docker volume addons-493618
	I1018 14:15:30.677190   94518 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:15:30.677212   94518 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 14:15:30.677277   94518 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-493618:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 14:15:35.066928   94518 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-493618:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.389587346s)
	I1018 14:15:35.066965   94518 kic.go:203] duration metric: took 4.38974774s to extract preloaded images to volume ...
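	Note that the preload never touches the container runtime directly: the tarball is unpacked into the named volume through a disposable container, so the node later finds its images already on disk. A sketch of the same pattern, with the image digest omitted for brevity and PRELOAD standing in for the tarball path shown above:
	    PRELOAD=/home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v "$PRELOAD:/preloaded.tar:ro" \
	      -v addons-493618:/extractDir \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757 \
	      -I lz4 -xf /preloaded.tar -C /extractDir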
	W1018 14:15:35.067065   94518 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 14:15:35.067125   94518 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 14:15:35.067165   94518 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 14:15:35.125586   94518 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-493618 --name addons-493618 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-493618 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-493618 --network addons-493618 --ip 192.168.49.2 --volume addons-493618:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 14:15:35.438654   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Running}}
	I1018 14:15:35.457572   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:35.476400   94518 cli_runner.go:164] Run: docker exec addons-493618 stat /var/lib/dpkg/alternatives/iptables
	I1018 14:15:35.523494   94518 oci.go:144] the created container "addons-493618" has a running status.
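	Because every node port is published on an ephemeral 127.0.0.1 port, later steps have to ask Docker where each one landed. The lookup below mirrors the inspect template used throughout the rest of this log (22/tcp is the SSH port; the same template works for 8443/tcp):
	    docker container inspect addons-493618 \
	      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'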
	I1018 14:15:35.523536   94518 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa...
	I1018 14:15:35.628924   94518 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 14:15:35.654055   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:35.673745   94518 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 14:15:35.673769   94518 kic_runner.go:114] Args: [docker exec --privileged addons-493618 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 14:15:35.716664   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:35.738950   94518 machine.go:93] provisionDockerMachine start ...
	I1018 14:15:35.739054   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:35.761798   94518 main.go:141] libmachine: Using SSH client type: native
	I1018 14:15:35.762148   94518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 14:15:35.762167   94518 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 14:15:35.762887   94518 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38702->127.0.0.1:32768: read: connection reset by peer
	I1018 14:15:38.898415   94518 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-493618
	
	I1018 14:15:38.898444   94518 ubuntu.go:182] provisioning hostname "addons-493618"
	I1018 14:15:38.898497   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:38.915941   94518 main.go:141] libmachine: Using SSH client type: native
	I1018 14:15:38.916229   94518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 14:15:38.916247   94518 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-493618 && echo "addons-493618" | sudo tee /etc/hostname
	I1018 14:15:39.059322   94518 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-493618
	
	I1018 14:15:39.059403   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:39.077377   94518 main.go:141] libmachine: Using SSH client type: native
	I1018 14:15:39.077594   94518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 14:15:39.077611   94518 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-493618' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-493618/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-493618' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 14:15:39.210493   94518 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 14:15:39.210526   94518 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 14:15:39.210562   94518 ubuntu.go:190] setting up certificates
	I1018 14:15:39.210574   94518 provision.go:84] configureAuth start
	I1018 14:15:39.210640   94518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-493618
	I1018 14:15:39.227138   94518 provision.go:143] copyHostCerts
	I1018 14:15:39.227219   94518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 14:15:39.227331   94518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 14:15:39.227397   94518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 14:15:39.227463   94518 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.addons-493618 san=[127.0.0.1 192.168.49.2 addons-493618 localhost minikube]
	I1018 14:15:39.766960   94518 provision.go:177] copyRemoteCerts
	I1018 14:15:39.767023   94518 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 14:15:39.767059   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:39.785116   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:39.881305   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 14:15:39.900749   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 14:15:39.918059   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 14:15:39.936428   94518 provision.go:87] duration metric: took 725.836064ms to configureAuth
	I1018 14:15:39.936459   94518 ubuntu.go:206] setting minikube options for container-runtime
	I1018 14:15:39.936620   94518 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:15:39.936726   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:39.953814   94518 main.go:141] libmachine: Using SSH client type: native
	I1018 14:15:39.954104   94518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 14:15:39.954132   94518 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 14:15:40.197505   94518 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 14:15:40.197532   94518 machine.go:96] duration metric: took 4.458558157s to provisionDockerMachine
	I1018 14:15:40.197544   94518 client.go:171] duration metric: took 16.669289178s to LocalClient.Create
	I1018 14:15:40.197568   94518 start.go:167] duration metric: took 16.669361804s to libmachine.API.Create "addons-493618"
	I1018 14:15:40.197580   94518 start.go:293] postStartSetup for "addons-493618" (driver="docker")
	I1018 14:15:40.197594   94518 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 14:15:40.197676   94518 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 14:15:40.197732   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:40.214597   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:40.313123   94518 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 14:15:40.316613   94518 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 14:15:40.316636   94518 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 14:15:40.316649   94518 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/addons for local assets ...
	I1018 14:15:40.316713   94518 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/files for local assets ...
	I1018 14:15:40.316739   94518 start.go:296] duration metric: took 119.152647ms for postStartSetup
	I1018 14:15:40.317068   94518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-493618
	I1018 14:15:40.334170   94518 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/config.json ...
	I1018 14:15:40.334433   94518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 14:15:40.334480   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:40.351086   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:40.444185   94518 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 14:15:40.448983   94518 start.go:128] duration metric: took 16.923022705s to createHost
	I1018 14:15:40.449022   94518 start.go:83] releasing machines lock for "addons-493618", held for 16.923231309s
	I1018 14:15:40.449108   94518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-493618
	I1018 14:15:40.466240   94518 ssh_runner.go:195] Run: cat /version.json
	I1018 14:15:40.466278   94518 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 14:15:40.466315   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:40.466349   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:40.483258   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:40.484430   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:40.575602   94518 ssh_runner.go:195] Run: systemctl --version
	I1018 14:15:40.630562   94518 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 14:15:40.667185   94518 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 14:15:40.672266   94518 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 14:15:40.672342   94518 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 14:15:40.699256   94518 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 14:15:40.699280   94518 start.go:495] detecting cgroup driver to use...
	I1018 14:15:40.699309   94518 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 14:15:40.699382   94518 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 14:15:40.716022   94518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 14:15:40.728685   94518 docker.go:218] disabling cri-docker service (if available) ...
	I1018 14:15:40.728735   94518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 14:15:40.745467   94518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 14:15:40.763518   94518 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 14:15:40.852188   94518 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 14:15:40.941218   94518 docker.go:234] disabling docker service ...
	I1018 14:15:40.941291   94518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 14:15:40.960280   94518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 14:15:40.973519   94518 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 14:15:41.063896   94518 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 14:15:41.148959   94518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 14:15:41.161676   94518 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 14:15:41.176951   94518 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 14:15:41.177026   94518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:15:41.187952   94518 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 14:15:41.188013   94518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:15:41.197200   94518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:15:41.206326   94518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:15:41.215130   94518 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 14:15:41.223534   94518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:15:41.233043   94518 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:15:41.246975   94518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
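	Taken together, the sed edits above leave the drop-in /etc/crio/crio.conf.d/02-crio.conf looking roughly like the fragment below. The file itself is never printed in this log, and the TOML section headers are an assumption based on stock CRI-O configuration; only the keys and values are taken from the commands:
	    [crio.runtime]
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"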
	I1018 14:15:41.256324   94518 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 14:15:41.263987   94518 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 14:15:41.264069   94518 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 14:15:41.276695   94518 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
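	The failed sysctl above is expected: /proc/sys/net/bridge/ only appears once br_netfilter is loaded, which is exactly what the modprobe fixes. Both settings can be checked by hand inside the node (e.g. via minikube ssh); a quick sketch:
	    sudo modprobe br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables   # resolvable once the module is loaded
	    cat /proc/sys/net/ipv4/ip_forward           # 1 after the echo above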
	I1018 14:15:41.284747   94518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:15:41.360872   94518 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 14:15:41.466951   94518 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 14:15:41.467031   94518 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 14:15:41.471440   94518 start.go:563] Will wait 60s for crictl version
	I1018 14:15:41.471517   94518 ssh_runner.go:195] Run: which crictl
	I1018 14:15:41.475466   94518 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 14:15:41.500862   94518 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 14:15:41.500988   94518 ssh_runner.go:195] Run: crio --version
	I1018 14:15:41.529363   94518 ssh_runner.go:195] Run: crio --version
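	These version probes can be repeated by hand. crictl picks up its endpoint from the /etc/crictl.yaml written earlier, but passing it explicitly also works:
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	    crio --version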
	I1018 14:15:41.558832   94518 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 14:15:41.560098   94518 cli_runner.go:164] Run: docker network inspect addons-493618 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 14:15:41.577556   94518 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 14:15:41.581897   94518 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 14:15:41.592876   94518 kubeadm.go:883] updating cluster {Name:addons-493618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-493618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 14:15:41.593049   94518 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:15:41.593097   94518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 14:15:41.626577   94518 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 14:15:41.626599   94518 crio.go:433] Images already preloaded, skipping extraction
	I1018 14:15:41.626659   94518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 14:15:41.651828   94518 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 14:15:41.651853   94518 cache_images.go:85] Images are preloaded, skipping loading
	I1018 14:15:41.651862   94518 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 14:15:41.651985   94518 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-493618 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-493618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 14:15:41.652054   94518 ssh_runner.go:195] Run: crio config
	I1018 14:15:41.697070   94518 cni.go:84] Creating CNI manager for ""
	I1018 14:15:41.697097   94518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 14:15:41.697114   94518 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 14:15:41.697135   94518 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-493618 NodeName:addons-493618 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 14:15:41.697247   94518 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-493618"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
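	The four documents above are written to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below) and ultimately consumed by kubeadm. The exact init invocation is not shown in this excerpt; illustratively, and assuming the .new file is promoted to kubeadm.yaml as minikube normally does, the bootstrap amounts to:
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml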
	
	I1018 14:15:41.697307   94518 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 14:15:41.705749   94518 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 14:15:41.705816   94518 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 14:15:41.714036   94518 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 14:15:41.727518   94518 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 14:15:41.743540   94518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1018 14:15:41.757431   94518 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 14:15:41.761307   94518 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 14:15:41.771339   94518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:15:41.848842   94518 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 14:15:41.872471   94518 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618 for IP: 192.168.49.2
	I1018 14:15:41.872502   94518 certs.go:195] generating shared ca certs ...
	I1018 14:15:41.872543   94518 certs.go:227] acquiring lock for ca certs: {Name:mk6d3b4e44856f498157352ffe8bb89d2c7a3998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:41.872726   94518 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key
	I1018 14:15:42.099521   94518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt ...
	I1018 14:15:42.099554   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt: {Name:mk29e474ac49378e3174669d30b699a0927d5939 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.099735   94518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key ...
	I1018 14:15:42.099748   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key: {Name:mk3df07768d76076523553d14b395d7aec695d8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.099827   94518 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key
	I1018 14:15:42.250081   94518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt ...
	I1018 14:15:42.250114   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt: {Name:mk9a000c7e66e15e6c70533a617d97af7b9526d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.250286   94518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key ...
	I1018 14:15:42.250299   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key: {Name:mked80e35481d07e9d2732a63324e9497996df0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.250389   94518 certs.go:257] generating profile certs ...
	I1018 14:15:42.250444   94518 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.key
	I1018 14:15:42.250458   94518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt with IP's: []
	I1018 14:15:42.310573   94518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt ...
	I1018 14:15:42.310609   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: {Name:mk817a96b6e7e4f2d967cd0f6b75836e15e32578 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.310772   94518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.key ...
	I1018 14:15:42.310783   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.key: {Name:mk2dc922e6933c9c6580f2453368c5810f4e481e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.310862   94518 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.key.4bff4883
	I1018 14:15:42.310880   94518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.crt.4bff4883 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 14:15:42.431608   94518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.crt.4bff4883 ...
	I1018 14:15:42.431643   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.crt.4bff4883: {Name:mkde2f0f0e05a8a44b434974d8b466c73645d4e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.431833   94518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.key.4bff4883 ...
	I1018 14:15:42.431850   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.key.4bff4883: {Name:mk6d2906da3206d1dab9c1811118ad12e5d1f944 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.431945   94518 certs.go:382] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.crt.4bff4883 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.crt
	I1018 14:15:42.432038   94518 certs.go:386] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.key.4bff4883 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.key
	I1018 14:15:42.432090   94518 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.key
	I1018 14:15:42.432109   94518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.crt with IP's: []
	I1018 14:15:42.629593   94518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.crt ...
	I1018 14:15:42.629624   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.crt: {Name:mkde5d9905c941564c933979fd5fade029103944 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.629812   94518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.key ...
	I1018 14:15:42.629826   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.key: {Name:mk36751e3ce77bf92cb13f27a98497c7ed9795bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.630014   94518 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 14:15:42.630049   94518 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem (1082 bytes)
	I1018 14:15:42.630071   94518 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem (1123 bytes)
	I1018 14:15:42.630096   94518 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem (1675 bytes)
	I1018 14:15:42.630764   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 14:15:42.650117   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 14:15:42.669226   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 14:15:42.690282   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 14:15:42.710069   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 14:15:42.728502   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 14:15:42.746298   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 14:15:42.764293   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 14:15:42.782203   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 14:15:42.801956   94518 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 14:15:42.814811   94518 ssh_runner.go:195] Run: openssl version
	I1018 14:15:42.821181   94518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 14:15:42.832594   94518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:15:42.836604   94518 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:15:42.836664   94518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:15:42.871729   94518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
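The b5213941 in the link name is not arbitrary: OpenSSL looks up CAs in /etc/ssl/certs by the certificate's subject hash, which is exactly what the openssl x509 -hash -noout call two lines up computes. Recreating the link by hand (a sketch of the same steps):

  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # hash is b5213941 here
  openssl verify /usr/share/ca-certificates/minikubeCA.pem              # should now resolve via the hash link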
	I1018 14:15:42.881086   94518 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 14:15:42.884965   94518 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
(The failed stat is expected: the absence of apiserver-kubelet-client.crt is how minikube detects a first start, so kubeadm below generates that certificate itself.)
	I1018 14:15:42.885020   94518 kubeadm.go:400] StartCluster: {Name:addons-493618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-493618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:15:42.885113   94518 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:15:42.885177   94518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:15:42.913223   94518 cri.go:89] found id: ""
	I1018 14:15:42.913289   94518 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 14:15:42.921815   94518 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 14:15:42.930869   94518 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 14:15:42.930952   94518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 14:15:42.939927   94518 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 14:15:42.939956   94518 kubeadm.go:157] found existing configuration files:
	
	I1018 14:15:42.940012   94518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 14:15:42.948083   94518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 14:15:42.948160   94518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 14:15:42.955881   94518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 14:15:42.963517   94518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 14:15:42.963574   94518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 14:15:42.971090   94518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 14:15:42.979262   94518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 14:15:42.979341   94518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 14:15:42.986704   94518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 14:15:42.994650   94518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 14:15:42.994702   94518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 14:15:43.002430   94518 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
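The long --ignore-preflight-errors list suppresses checks that are expected to fail inside a kicbase container (SystemVerification, swap, CPU/memory, and the port and manifest-file availability checks); the driver-specific reasoning is logged at kubeadm.go:214 above. To see what the preflight stage alone reports, one could run just that phase (a sketch, reusing the same config file as the run):

  sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
    --ignore-preflight-errors=SystemVerification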
	I1018 14:15:43.040520   94518 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 14:15:43.040577   94518 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 14:15:43.062959   94518 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 14:15:43.063081   94518 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 14:15:43.063146   94518 kubeadm.go:318] OS: Linux
	I1018 14:15:43.063197   94518 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 14:15:43.063262   94518 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 14:15:43.063319   94518 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 14:15:43.063359   94518 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 14:15:43.063397   94518 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 14:15:43.063445   94518 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 14:15:43.063497   94518 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 14:15:43.063534   94518 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 14:15:43.122707   94518 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 14:15:43.122870   94518 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 14:15:43.123048   94518 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 14:15:43.130408   94518 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 14:15:43.132493   94518 out.go:252]   - Generating certificates and keys ...
	I1018 14:15:43.132580   94518 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 14:15:43.132638   94518 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 14:15:43.195493   94518 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 14:15:43.335589   94518 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 14:15:43.540635   94518 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 14:15:43.653902   94518 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 14:15:43.807694   94518 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 14:15:43.807847   94518 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-493618 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 14:15:43.853102   94518 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 14:15:43.853283   94518 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-493618 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 14:15:43.971707   94518 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 14:15:44.039605   94518 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 14:15:44.636757   94518 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 14:15:44.636886   94518 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 14:15:45.211213   94518 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 14:15:45.796318   94518 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 14:15:45.822982   94518 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 14:15:46.106180   94518 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 14:15:46.239037   94518 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 14:15:46.239513   94518 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 14:15:46.243151   94518 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 14:15:46.244760   94518 out.go:252]   - Booting up control plane ...
	I1018 14:15:46.244874   94518 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 14:15:46.244990   94518 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 14:15:46.245625   94518 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 14:15:46.260250   94518 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 14:15:46.260360   94518 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 14:15:46.267696   94518 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 14:15:46.267817   94518 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 14:15:46.267866   94518 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 14:15:46.370744   94518 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 14:15:46.370865   94518 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 14:15:47.371649   94518 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000990385s
	I1018 14:15:47.376256   94518 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 14:15:47.376432   94518 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 14:15:47.376566   94518 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 14:15:47.376709   94518 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 14:15:49.135751   94518 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.759510931s
	I1018 14:15:49.255604   94518 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.879264109s
	I1018 14:15:50.878424   94518 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.502192934s
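The three control-plane-check probes map to fixed endpoints: the API server's /livez on the advertise address, and the controller manager's and scheduler's secure ports on localhost. Checked by hand from the node they would look like this (a sketch; -k because the serving certs are signed by the cluster-internal CA):

  curl -k https://192.168.49.2:8443/livez
  curl -k https://127.0.0.1:10257/healthz
  curl -k https://127.0.0.1:10259/livez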
	I1018 14:15:50.890048   94518 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 14:15:50.901423   94518 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 14:15:50.910227   94518 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 14:15:50.910432   94518 kubeadm.go:318] [mark-control-plane] Marking the node addons-493618 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 14:15:50.918188   94518 kubeadm.go:318] [bootstrap-token] Using token: 2jy7nx.1zs0hlvym10ojzfo
	I1018 14:15:50.919589   94518 out.go:252]   - Configuring RBAC rules ...
	I1018 14:15:50.919736   94518 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 14:15:50.923222   94518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 14:15:50.928452   94518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 14:15:50.931223   94518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 14:15:50.933641   94518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 14:15:50.937165   94518 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 14:15:51.285114   94518 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 14:15:51.702798   94518 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 14:15:52.284201   94518 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 14:15:52.285014   94518 kubeadm.go:318] 
	I1018 14:15:52.285123   94518 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 14:15:52.285134   94518 kubeadm.go:318] 
	I1018 14:15:52.285253   94518 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 14:15:52.285261   94518 kubeadm.go:318] 
	I1018 14:15:52.285297   94518 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 14:15:52.285409   94518 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 14:15:52.285497   94518 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 14:15:52.285507   94518 kubeadm.go:318] 
	I1018 14:15:52.285594   94518 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 14:15:52.285604   94518 kubeadm.go:318] 
	I1018 14:15:52.285673   94518 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 14:15:52.285694   94518 kubeadm.go:318] 
	I1018 14:15:52.285777   94518 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 14:15:52.285856   94518 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 14:15:52.285945   94518 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 14:15:52.285954   94518 kubeadm.go:318] 
	I1018 14:15:52.286046   94518 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 14:15:52.286158   94518 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 14:15:52.286173   94518 kubeadm.go:318] 
	I1018 14:15:52.286260   94518 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 2jy7nx.1zs0hlvym10ojzfo \
	I1018 14:15:52.286412   94518 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9845daa2461a34dd973607f8353b114051f1b03075ff9500a74fb21865e04329 \
	I1018 14:15:52.286450   94518 kubeadm.go:318] 	--control-plane 
	I1018 14:15:52.286458   94518 kubeadm.go:318] 
	I1018 14:15:52.286553   94518 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 14:15:52.286561   94518 kubeadm.go:318] 
	I1018 14:15:52.286655   94518 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 2jy7nx.1zs0hlvym10ojzfo \
	I1018 14:15:52.286798   94518 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9845daa2461a34dd973607f8353b114051f1b03075ff9500a74fb21865e04329 
	I1018 14:15:52.288880   94518 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 14:15:52.289078   94518 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
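The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. If the init output is lost, it can be recomputed from the CA certificate with the standard kubeadm recipe (assuming an RSA CA key, and using this cluster's certificatesDir from the config above):

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | sha256sum | cut -d' ' -f1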
	I1018 14:15:52.289109   94518 cni.go:84] Creating CNI manager for ""
	I1018 14:15:52.289123   94518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 14:15:52.290888   94518 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 14:15:52.292177   94518 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 14:15:52.296572   94518 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 14:15:52.296594   94518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 14:15:52.309832   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 14:15:52.517329   94518 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 14:15:52.517424   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:52.517457   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-493618 minikube.k8s.io/updated_at=2025_10_18T14_15_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=addons-493618 minikube.k8s.io/primary=true
	I1018 14:15:52.601850   94518 ops.go:34] apiserver oom_adj: -16
	I1018 14:15:52.601988   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:53.102345   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:53.602765   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:54.102512   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:54.602301   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:55.102326   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:55.602077   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:56.102665   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:56.602275   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:57.102902   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:57.602898   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:57.666050   94518 kubeadm.go:1113] duration metric: took 5.148697107s to wait for elevateKubeSystemPrivileges
	I1018 14:15:57.666085   94518 kubeadm.go:402] duration metric: took 14.781070154s to StartCluster
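The half-second polling loop above, repeated kubectl get sa default calls, is a readiness gate: once the default ServiceAccount exists, the controller manager's service-account controller is up and the RBAC grants minikube needs can be created (hence the elevateKubeSystemPrivileges timing). The equivalent wait as a sketch:

  # block until the default ServiceAccount appears (same signal the loop above polls for)
  until sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get sa default >/dev/null 2>&1; do sleep 0.5; done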
	I1018 14:15:57.666113   94518 settings.go:142] acquiring lock: {Name:mk9d9d72167811b3554435e137ed17137bf54a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:57.666241   94518 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 14:15:57.666666   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/kubeconfig: {Name:mkdc6fbc9be8e57447490b30dd5bb0086611876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:57.666904   94518 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 14:15:57.666964   94518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 14:15:57.667023   94518 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 14:15:57.667176   94518 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:15:57.667191   94518 addons.go:69] Setting ingress-dns=true in profile "addons-493618"
	I1018 14:15:57.667213   94518 addons.go:238] Setting addon ingress-dns=true in "addons-493618"
	I1018 14:15:57.667219   94518 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-493618"
	I1018 14:15:57.667224   94518 addons.go:69] Setting cloud-spanner=true in profile "addons-493618"
	I1018 14:15:57.667225   94518 addons.go:69] Setting yakd=true in profile "addons-493618"
	I1018 14:15:57.667237   94518 addons.go:238] Setting addon cloud-spanner=true in "addons-493618"
	I1018 14:15:57.667243   94518 addons.go:238] Setting addon yakd=true in "addons-493618"
	I1018 14:15:57.667261   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667270   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667306   94518 addons.go:69] Setting registry-creds=true in profile "addons-493618"
	I1018 14:15:57.667319   94518 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-493618"
	I1018 14:15:57.667325   94518 addons.go:238] Setting addon registry-creds=true in "addons-493618"
	I1018 14:15:57.667340   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667333   94518 addons.go:69] Setting ingress=true in profile "addons-493618"
	I1018 14:15:57.667353   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667362   94518 addons.go:238] Setting addon ingress=true in "addons-493618"
	I1018 14:15:57.667347   94518 addons.go:69] Setting gcp-auth=true in profile "addons-493618"
	I1018 14:15:57.667379   94518 addons.go:69] Setting inspektor-gadget=true in profile "addons-493618"
	I1018 14:15:57.667395   94518 addons.go:238] Setting addon inspektor-gadget=true in "addons-493618"
	I1018 14:15:57.667413   94518 mustload.go:65] Loading cluster: addons-493618
	I1018 14:15:57.667421   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667425   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667659   94518 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:15:57.667849   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667856   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667873   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667881   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667885   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667927   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667957   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667977   94518 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-493618"
	I1018 14:15:57.667997   94518 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-493618"
	I1018 14:15:57.668260   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.668538   94518 addons.go:69] Setting volcano=true in profile "addons-493618"
	I1018 14:15:57.668558   94518 addons.go:238] Setting addon volcano=true in "addons-493618"
	I1018 14:15:57.668585   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.668706   94518 addons.go:69] Setting default-storageclass=true in profile "addons-493618"
	I1018 14:15:57.668731   94518 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-493618"
	I1018 14:15:57.668892   94518 addons.go:69] Setting volumesnapshots=true in profile "addons-493618"
	I1018 14:15:57.668932   94518 addons.go:238] Setting addon volumesnapshots=true in "addons-493618"
	I1018 14:15:57.668964   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.669072   94518 addons.go:69] Setting storage-provisioner=true in profile "addons-493618"
	I1018 14:15:57.669100   94518 addons.go:238] Setting addon storage-provisioner=true in "addons-493618"
	I1018 14:15:57.669121   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.669367   94518 out.go:179] * Verifying Kubernetes components...
	I1018 14:15:57.667211   94518 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-493618"
	I1018 14:15:57.669415   94518 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-493618"
	I1018 14:15:57.669445   94518 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-493618"
	I1018 14:15:57.669466   94518 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-493618"
	I1018 14:15:57.669478   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.669495   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.669783   94518 addons.go:69] Setting registry=true in profile "addons-493618"
	I1018 14:15:57.669803   94518 addons.go:238] Setting addon registry=true in "addons-493618"
	I1018 14:15:57.669828   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667372   94518 addons.go:69] Setting metrics-server=true in profile "addons-493618"
	I1018 14:15:57.670134   94518 addons.go:238] Setting addon metrics-server=true in "addons-493618"
	I1018 14:15:57.670161   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667262   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.671078   94518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:15:57.677610   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.677633   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.678278   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.678433   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.680282   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.683274   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.686374   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.687318   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.687981   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.726980   94518 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 14:15:57.727164   94518 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 14:15:57.728296   94518 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 14:15:57.728322   94518 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 14:15:57.728394   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.731709   94518 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 14:15:57.735505   94518 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 14:15:57.735529   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 14:15:57.735623   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.744401   94518 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 14:15:57.746166   94518 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 14:15:57.746193   94518 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 14:15:57.746276   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.753364   94518 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-493618"
	I1018 14:15:57.753422   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.753977   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.757779   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.760961   94518 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 14:15:57.761050   94518 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 14:15:57.761128   94518 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 14:15:57.765412   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 14:15:57.765469   94518 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 14:15:57.765570   94518 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 14:15:57.765575   94518 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 14:15:57.765590   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 14:15:57.765649   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.765678   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.773672   94518 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 14:15:57.782459   94518 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 14:15:57.782523   94518 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 14:15:57.782594   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.782951   94518 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:15:57.783453   94518 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 14:15:57.783474   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 14:15:57.784494   94518 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 14:15:57.785442   94518 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 14:15:57.785814   94518 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:15:57.785850   94518 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 14:15:57.785866   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 14:15:57.785946   94518 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 14:15:57.786008   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.786341   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.795904   94518 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 14:15:57.795986   94518 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 14:15:57.796002   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 14:15:57.796075   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.797016   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 14:15:57.797107   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.797727   94518 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 14:15:57.797746   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 14:15:57.797798   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	W1018 14:15:57.799421   94518 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 14:15:57.802268   94518 addons.go:238] Setting addon default-storageclass=true in "addons-493618"
	I1018 14:15:57.802319   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.802790   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.803968   94518 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 14:15:57.806759   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.806881   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 14:15:57.807070   94518 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 14:15:57.807097   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 14:15:57.807159   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.809404   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 14:15:57.810905   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 14:15:57.812585   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 14:15:57.814158   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 14:15:57.817562   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 14:15:57.818954   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 14:15:57.820159   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 14:15:57.821469   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.822222   94518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
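The pipeline above rewrites the live Corefile in place: it injects a hosts stanza ahead of the "forward . /etc/resolv.conf" line so host.minikube.internal resolves to the gateway (192.168.49.1), inserts the log plugin before errors, and feeds the result back with kubectl replace -f -. The applied change can be inspected afterwards with:

  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'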
	I1018 14:15:57.822661   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.825309   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 14:15:57.825341   94518 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 14:15:57.825404   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.843406   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.845448   94518 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 14:15:57.846549   94518 out.go:179]   - Using image docker.io/busybox:stable
	I1018 14:15:57.847761   94518 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 14:15:57.847936   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 14:15:57.848446   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.848859   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.862892   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.865577   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.865604   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.867128   94518 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 14:15:57.867148   94518 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 14:15:57.867202   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.870311   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.875963   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.876057   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.878232   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	W1018 14:15:57.891707   94518 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 14:15:57.891829   94518 retry.go:31] will retry after 359.382679ms: ssh: handshake failed: EOF
	I1018 14:15:57.896432   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.907502   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.909844   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.912211   94518 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 14:15:57.988091   94518 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 14:15:57.988173   94518 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 14:15:57.997450   94518 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 14:15:57.997478   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 14:15:58.003508   94518 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 14:15:58.003538   94518 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 14:15:58.006239   94518 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 14:15:58.006263   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 14:15:58.015848   94518 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 14:15:58.015893   94518 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 14:15:58.020396   94518 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 14:15:58.020421   94518 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 14:15:58.024488   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 14:15:58.035697   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 14:15:58.035896   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 14:15:58.038172   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 14:15:58.041347   94518 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 14:15:58.041371   94518 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 14:15:58.049321   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 14:15:58.050245   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 14:15:58.052160   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 14:15:58.061988   94518 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:15:58.062019   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 14:15:58.069226   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 14:15:58.070543   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 14:15:58.074239   94518 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 14:15:58.074279   94518 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 14:15:58.079168   94518 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 14:15:58.079198   94518 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 14:15:58.092132   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:15:58.096100   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 14:15:58.102856   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 14:15:58.122432   94518 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 14:15:58.122460   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 14:15:58.133719   94518 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 14:15:58.133827   94518 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 14:15:58.178253   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 14:15:58.201737   94518 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 14:15:58.201955   94518 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 14:15:58.250630   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 14:15:58.250660   94518 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 14:15:58.257881   94518 node_ready.go:35] waiting up to 6m0s for node "addons-493618" to be "Ready" ...
	I1018 14:15:58.259987   94518 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
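
Manual equivalents of the two checks above, assuming kubectl is pointed at this cluster (hypothetical invocations, not part of this run): the node-Ready wait, and a look at the CoreDNS ConfigMap that now carries the injected host.minikube.internal record:

  kubectl wait --for=condition=Ready node/addons-493618 --timeout=6m
  kubectl -n kube-system get configmap coredns -o yaml
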
	I1018 14:15:58.305869   94518 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 14:15:58.305892   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 14:15:58.372074   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 14:15:58.495259   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 14:15:58.495413   94518 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 14:15:58.542356   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 14:15:58.542459   94518 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 14:15:58.574546   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 14:15:58.574578   94518 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 14:15:58.610004   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 14:15:58.610119   94518 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 14:15:58.650707   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 14:15:58.650741   94518 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 14:15:58.689762   94518 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 14:15:58.689866   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 14:15:58.728580   94518 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 14:15:58.728663   94518 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 14:15:58.777291   94518 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 14:15:58.777320   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 14:15:58.779077   94518 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-493618" context rescaled to 1 replicas
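
The rescale logged above is the same operation as the following kubectl command (a sketch, assuming kubectl targets the addons-493618 context):

  kubectl -n kube-system scale deployment coredns --replicas=1
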
	I1018 14:15:58.793294   94518 addons.go:479] Verifying addon registry=true in "addons-493618"
	I1018 14:15:58.795632   94518 out.go:179] * Verifying registry addon...
	I1018 14:15:58.797513   94518 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 14:15:58.802260   94518 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 14:15:58.802346   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 14:15:58.819478   94518 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 14:15:58.819580   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
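
The kapi.go loop above polls pods by label selector until they leave Pending; an equivalent one-shot manual wait (hypothetical, not part of this run) would be:

  kubectl -n kube-system wait --for=condition=Ready pod \
    -l kubernetes.io/minikube-addons=registry --timeout=6m
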
	I1018 14:15:58.840463   94518 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 14:15:58.840559   94518 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 14:15:58.884762   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 14:15:59.253579   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.21783586s)
	I1018 14:15:59.253646   94518 addons.go:479] Verifying addon ingress=true in "addons-493618"
	I1018 14:15:59.253649   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.217694877s)
	I1018 14:15:59.253724   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.215515463s)
	I1018 14:15:59.253830   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.204458532s)
	I1018 14:15:59.253862   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.203589981s)
	I1018 14:15:59.253978   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.184732075s)
	I1018 14:15:59.253955   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.201771193s)
	I1018 14:15:59.254125   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183543894s)
	I1018 14:15:59.254146   94518 addons.go:479] Verifying addon metrics-server=true in "addons-493618"
	I1018 14:15:59.254259   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.162092007s)
	I1018 14:15:59.254308   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.151427901s)
	W1018 14:15:59.254331   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:15:59.254361   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.076083963s)
	I1018 14:15:59.254360   94518 retry.go:31] will retry after 263.001722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
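
The validation error above means the staged /etc/kubernetes/addons/ig-crd.yaml lacks the mandatory type metadata every Kubernetes object must carry (apiVersion and kind), so the file is most likely empty or truncated rather than a well-formed CRD; note the retries below keep failing identically, consistent with a bad file rather than a transient API error. A quick way to confirm (hypothetical invocation, not part of this run):

  minikube -p addons-493618 ssh -- sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml
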
	I1018 14:15:59.254285   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.158155774s)
	I1018 14:15:59.255381   94518 out.go:179] * Verifying ingress addon...
	I1018 14:15:59.256267   94518 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-493618 service yakd-dashboard -n yakd-dashboard
	
	I1018 14:15:59.258528   94518 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 14:15:59.262829   94518 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 14:15:59.262849   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 14:15:59.262881   94518 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
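
The 'storage-provisioner-rancher' warning above is an optimistic-concurrency conflict: another writer updated the StorageClass between this callback's read and write. Re-applying the default-class annotation once the competing update settles normally resolves it (a sketch, assuming the standard default-class annotation):

  kubectl patch storageclass local-path -p \
    '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
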
	I1018 14:15:59.362679   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:15:59.517934   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:15:59.762348   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:15:59.767796   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.395673176s)
	W1018 14:15:59.767854   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 14:15:59.767878   94518 retry.go:31] will retry after 185.211057ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
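
This failure is an ordering race rather than a bad manifest: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define its kind, so the first pass can hit the API server before those CRDs are established. The retry below succeeds once they are; a manual sequence would wait for establishment explicitly (hypothetical, using the CRD name from the stdout above):

  kubectl wait --for=condition=Established \
    crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
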
	I1018 14:15:59.768052   94518 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-493618"
	I1018 14:15:59.770042   94518 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 14:15:59.772172   94518 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 14:15:59.775895   94518 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 14:15:59.775932   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:15:59.862807   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:15:59.953296   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1018 14:16:00.179866   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:00.179932   94518 retry.go:31] will retry after 259.138229ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 14:16:00.261895   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:00.262066   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:00.276175   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:00.300887   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:00.439689   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:00.762081   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:00.862741   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:00.862953   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:01.262222   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:01.275838   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:01.300734   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:01.762110   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:01.862891   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:01.863056   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:02.261689   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:02.275594   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:02.300586   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:02.456467   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.50311722s)
	I1018 14:16:02.456599   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.016876084s)
	W1018 14:16:02.456633   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:02.456657   94518 retry.go:31] will retry after 555.919598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 14:16:02.761271   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:02.761679   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:02.862629   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:02.862696   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:03.013466   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:03.261821   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:03.275574   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:03.301416   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 14:16:03.558757   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:03.558796   94518 retry.go:31] will retry after 725.766019ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:03.761660   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:03.862928   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:03.862971   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:04.262257   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:04.275978   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:04.285123   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:04.301354   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:04.762331   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 14:16:04.844992   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:04.845023   94518 retry.go:31] will retry after 1.701988941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:04.862778   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:04.862875   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 14:16:05.261697   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:05.262238   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:05.275990   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:05.300734   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:05.366047   94518 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 14:16:05.366115   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:16:05.383978   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:16:05.493818   94518 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 14:16:05.506797   94518 addons.go:238] Setting addon gcp-auth=true in "addons-493618"
	I1018 14:16:05.506861   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:16:05.507286   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:16:05.523892   94518 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 14:16:05.523968   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:16:05.541453   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:16:05.636326   94518 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 14:16:05.637653   94518 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:16:05.638692   94518 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 14:16:05.638712   94518 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 14:16:05.652837   94518 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 14:16:05.652861   94518 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 14:16:05.666299   94518 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 14:16:05.666320   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 14:16:05.680085   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 14:16:05.761566   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:05.775505   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:05.801315   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:05.994641   94518 addons.go:479] Verifying addon gcp-auth=true in "addons-493618"
	I1018 14:16:05.996092   94518 out.go:179] * Verifying gcp-auth addon...
	I1018 14:16:05.998105   94518 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 14:16:06.000784   94518 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 14:16:06.000799   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:06.261679   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:06.275363   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:06.301313   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:06.501300   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:06.547370   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:06.762544   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:06.775122   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:06.801020   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:07.001387   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:07.102721   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:07.102751   94518 retry.go:31] will retry after 1.894325627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 14:16:07.261769   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:07.261821   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:07.275476   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:07.301602   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:07.501354   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:07.761681   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:07.775315   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:07.801142   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:08.000985   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:08.261664   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:08.275376   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:08.301438   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:08.501200   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:08.762331   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:08.779339   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:08.801735   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:08.997988   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:09.001098   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:09.261663   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:09.275805   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:09.300898   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:09.500718   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:09.549206   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:09.549247   94518 retry.go:31] will retry after 3.310963502s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:09.761098   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 14:16:09.761118   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:09.776183   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:09.800955   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:10.002285   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:10.261461   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:10.275203   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:10.300857   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:10.501789   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:10.762046   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:10.775575   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:10.801657   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:11.001278   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:11.261449   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:11.275212   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:11.301160   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:11.500880   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:11.761928   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:11.775663   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:11.800279   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:12.001764   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:12.261645   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:12.261934   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:12.275426   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:12.301237   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:12.501106   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:12.762500   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:12.775341   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:12.801069   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:12.861213   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:13.001726   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:13.261985   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:13.275741   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:13.300410   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 14:16:13.412655   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:13.412687   94518 retry.go:31] will retry after 2.146003967s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:13.501415   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:13.761464   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:13.775396   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:13.801074   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:14.001649   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:14.261663   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:14.275331   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:14.301036   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:14.500895   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:14.760721   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:14.762189   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:14.775457   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:14.801062   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:15.001069   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:15.261905   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:15.275163   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:15.300759   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:15.501790   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:15.558849   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:15.761297   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:15.775871   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:15.800389   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:16.001291   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:16.114482   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:16.114511   94518 retry.go:31] will retry after 5.173996473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
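Editor's note: this failure is kubectl's client-side validation. Every object in an applied manifest must declare top-level apiVersion and kind fields, and /etc/kubernetes/addons/ig-crd.yaml apparently reaches the node without them; the deployment file applies cleanly (all its resources report "unchanged"/"configured"), so only the CRD manifest is rejected. A sketch of the equivalent check, assuming a single-document manifest (kubectl itself splits multi-document files on ---):

package main

import (
	"fmt"
	"os"

	"sigs.k8s.io/yaml"
)

func main() {
	raw, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml") // path from the log
	if err != nil {
		panic(err)
	}
	// Only the two required top-level fields matter for this check.
	var obj struct {
		APIVersion string `json:"apiVersion"`
		Kind       string `json:"kind"`
	}
	if err := yaml.Unmarshal(raw, &obj); err != nil {
		panic(err)
	}
	var missing []string
	if obj.APIVersion == "" {
		missing = append(missing, "apiVersion not set")
	}
	if obj.Kind == "" {
		missing = append(missing, "kind not set")
	}
	if len(missing) > 0 {
		// Matches the shape of the kubectl error in the log above.
		fmt.Printf("error validating data: %v\n", missing)
	}
}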
	I1018 14:16:16.261692   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:16.275397   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:16.301389   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:16.500980   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:16.760795   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:16.762022   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:16.775519   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:16.801313   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:17.000944   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:17.261757   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:17.275325   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:17.300931   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:17.502121   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:17.761220   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:17.775796   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:17.800763   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:18.001822   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:18.261706   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:18.275401   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:18.301218   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:18.500894   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:18.761938   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:18.775652   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:18.800266   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:19.001007   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:19.261023   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:19.261757   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:19.275393   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:19.301127   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:19.500951   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:19.761787   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:19.775216   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:19.800787   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:20.001688   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:20.261951   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:20.275392   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:20.301151   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:20.501366   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:20.761599   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:20.776707   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:20.800198   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:21.001395   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:21.261329   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 14:16:21.261409   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:21.275153   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:21.289245   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:21.300476   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:21.501513   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:21.761123   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:21.775774   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:21.800635   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 14:16:21.851749   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:21.851778   94518 retry.go:31] will retry after 9.714380288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
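Editor's note: the retry delays logged for this apply (~5.2s here, ~9.7s above, ~19.4s further below) roughly double per attempt with jitter — exponential backoff. A sketch of that pattern under those assumptions; minikube's actual retry package may differ in its jitter and cap:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs op up to attempts times, doubling the delay between
// tries and adding up to 25% jitter, mirroring the retry.go lines above.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)/4))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return err
}

func main() {
	_ = retryWithBackoff(4, 5*time.Second, func() error {
		return errors.New("kubectl apply failed") // stand-in for the failing apply
	})
}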
	I1018 14:16:22.001747   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:22.261813   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:22.275852   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:22.300396   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:22.501345   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:22.761740   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:22.775460   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:22.801088   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:23.000938   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:23.261494   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:23.275351   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:23.301186   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:23.501437   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:23.761231   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:23.761277   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:23.776153   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:23.800929   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:24.001798   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:24.261566   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:24.275231   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:24.300782   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:24.501826   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:24.761655   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:24.775311   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:24.801269   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:25.001202   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:25.261268   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:25.276037   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:25.300709   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:25.501743   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:25.761717   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:25.761933   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:25.775270   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:25.800968   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:26.001027   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:26.261514   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:26.275058   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:26.300235   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:26.500857   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:26.761281   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:26.775331   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:26.801253   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:27.001003   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:27.261650   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:27.275357   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:27.301224   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:27.501285   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:27.761635   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:27.775243   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:27.801161   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:28.001260   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:28.261155   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:28.261172   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:28.276267   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:28.300992   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:28.501784   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:28.761766   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:28.775549   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:28.801180   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:29.001049   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:29.261993   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:29.275515   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:29.301883   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:29.501469   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:29.761634   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:29.775146   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:29.801064   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:30.001967   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:30.261684   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:30.275382   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:30.301634   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:30.501572   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:30.761473   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:30.762048   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:30.775275   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:30.800997   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:31.000979   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:31.261897   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:31.275984   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:31.300628   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:31.501831   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:31.566932   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:31.761417   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:31.774979   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:31.800622   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:32.001291   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:32.118968   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:32.119002   94518 retry.go:31] will retry after 19.360841038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:32.261895   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:32.275779   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:32.304391   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:32.501587   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:32.761735   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:32.761898   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:32.775323   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:32.801370   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:33.001609   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:33.261584   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:33.275443   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:33.301126   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:33.501842   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:33.761935   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:33.774859   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:33.800261   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:34.001159   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:34.261227   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:34.275683   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:34.301293   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:34.501219   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:34.761634   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:34.775251   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:34.801016   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:35.002045   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:35.261059   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:35.262099   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:35.275492   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:35.301345   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:35.501646   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:35.761690   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:35.775306   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:35.800935   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:36.001009   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:36.261734   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:36.275232   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:36.300862   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:36.502157   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:36.761205   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:36.776410   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:36.801109   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:37.001783   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:37.261689   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:37.261744   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:37.275555   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:37.301669   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:37.501215   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:37.762014   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:37.775442   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:37.801110   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:38.000880   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:38.263391   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:38.275251   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:38.301068   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:38.501978   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:38.760157   94518 node_ready.go:49] node "addons-493618" is "Ready"
	I1018 14:16:38.760187   94518 node_ready.go:38] duration metric: took 40.502258296s for node "addons-493618" to be "Ready" ...
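Editor's note: the node_ready transition above boils down to reading the node's Ready condition from its status, which the earlier node_ready.go:57 warnings were polling. A client-go sketch of that check, with a hypothetical kubeconfig path and helper name:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the named node's Ready condition is True.
func nodeIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // no Ready condition posted yet
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := nodeIsReady(cs, "addons-493618")
	fmt.Println(ready, err)
}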
	I1018 14:16:38.760202   94518 api_server.go:52] waiting for apiserver process to appear ...
	I1018 14:16:38.760256   94518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 14:16:38.761614   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:38.775477   94518 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 14:16:38.775499   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:38.778619   94518 api_server.go:72] duration metric: took 41.111664217s to wait for apiserver process to appear ...
	I1018 14:16:38.778646   94518 api_server.go:88] waiting for apiserver healthz status ...
	I1018 14:16:38.778670   94518 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 14:16:38.782820   94518 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 14:16:38.783979   94518 api_server.go:141] control plane version: v1.34.1
	I1018 14:16:38.784055   94518 api_server.go:131] duration metric: took 5.400033ms to wait for apiserver health ...
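Editor's note: the healthz probe is a plain HTTPS GET that counts a 200 response with body "ok" (as printed above) as healthy. A sketch of the same request; the InsecureSkipVerify here is an illustration-only shortcut, since minikube trusts the cluster's CA, and anonymous access to /healthz depends on the cluster's RBAC settings:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Shortcut for the sketch only; do not skip verification in real code.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz") // endpoint from the log
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
}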
	I1018 14:16:38.784069   94518 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 14:16:38.790511   94518 system_pods.go:59] 20 kube-system pods found
	I1018 14:16:38.790555   94518 system_pods.go:61] "amd-gpu-device-plugin-ps8fn" [212e7c35-e96b-474d-a839-a1234082db1e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:16:38.790566   94518 system_pods.go:61] "coredns-66bc5c9577-zsv4k" [f2e200e1-f869-49c9-9964-a7ce8b78fc36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:16:38.790574   94518 system_pods.go:61] "csi-hostpath-attacher-0" [3623b39a-4edb-4205-aeef-6f143a1226ca] Pending
	I1018 14:16:38.790580   94518 system_pods.go:61] "csi-hostpath-resizer-0" [d1f8fabd-93ae-47ef-9455-9609361e348d] Pending
	I1018 14:16:38.790589   94518 system_pods.go:61] "csi-hostpathplugin-t8ksl" [ed3177ab-2b66-47a9-8f89-40db56cbc332] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 14:16:38.790595   94518 system_pods.go:61] "etcd-addons-493618" [b9b0c453-cead-48a5-916f-b6743b403b4d] Running
	I1018 14:16:38.790602   94518 system_pods.go:61] "kindnet-vhk9j" [f447a047-b1f9-4b54-8f86-2bceeda6e6f2] Running
	I1018 14:16:38.790608   94518 system_pods.go:61] "kube-apiserver-addons-493618" [75f188ee-c145-4c6c-8fe4-ec8c6c92a91c] Running
	I1018 14:16:38.790613   94518 system_pods.go:61] "kube-controller-manager-addons-493618" [18b03cc0-9988-4bdf-b7a2-1c4c0a50ea7c] Running
	I1018 14:16:38.790621   94518 system_pods.go:61] "kube-ingress-dns-minikube" [d97bcf56-e215-486d-bd23-84d300a41f66] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:16:38.790626   94518 system_pods.go:61] "kube-proxy-5x2v2" [43716c06-fdd2-4f68-87f9-d90cf2f16440] Running
	I1018 14:16:38.790631   94518 system_pods.go:61] "kube-scheduler-addons-493618" [dc3fc718-9168-4264-aa9e-4ce985ac1d72] Running
	I1018 14:16:38.790638   94518 system_pods.go:61] "metrics-server-85b7d694d7-hzzlq" [93c9f378-d616-4060-a537-9060c4ce996a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:16:38.790647   94518 system_pods.go:61] "nvidia-device-plugin-daemonset-w9ks6" [0837d0f2-cef9-4ff6-b233-2547cb3c5f55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:16:38.790655   94518 system_pods.go:61] "registry-6b586f9694-pdjc2" [d5f69ebc-b615-4334-8fe1-593c6fe7f496] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:16:38.790665   94518 system_pods.go:61] "registry-creds-764b6fb674-czp24" [a3c3218a-127e-4d0d-90f6-a2b735fc7c5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:16:38.790681   94518 system_pods.go:61] "registry-proxy-dddz6" [55ecd78e-f023-433a-80aa-4a98aed74734] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:16:38.790688   94518 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8ftdc" [41ec4b8b-e0f1-4aed-a826-e4d50c52e35d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:38.790699   94518 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fcm6w" [fcafa25f-dac8-4c6c-9dee-6c155ff5f214] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:38.790706   94518 system_pods.go:61] "storage-provisioner" [0ce9ef3e-e1d3-4979-b307-1d30a38cfc5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 14:16:38.790714   94518 system_pods.go:74] duration metric: took 6.637048ms to wait for pod list to return data ...
	I1018 14:16:38.790727   94518 default_sa.go:34] waiting for default service account to be created ...
	I1018 14:16:38.813945   94518 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 14:16:38.813976   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:38.817277   94518 default_sa.go:45] found service account: "default"
	I1018 14:16:38.817303   94518 default_sa.go:55] duration metric: took 26.568684ms for default service account to be created ...
	I1018 14:16:38.817314   94518 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 14:16:38.836792   94518 system_pods.go:86] 20 kube-system pods found
	I1018 14:16:38.836840   94518 system_pods.go:89] "amd-gpu-device-plugin-ps8fn" [212e7c35-e96b-474d-a839-a1234082db1e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:16:38.836858   94518 system_pods.go:89] "coredns-66bc5c9577-zsv4k" [f2e200e1-f869-49c9-9964-a7ce8b78fc36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:16:38.836867   94518 system_pods.go:89] "csi-hostpath-attacher-0" [3623b39a-4edb-4205-aeef-6f143a1226ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 14:16:38.836875   94518 system_pods.go:89] "csi-hostpath-resizer-0" [d1f8fabd-93ae-47ef-9455-9609361e348d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 14:16:38.836883   94518 system_pods.go:89] "csi-hostpathplugin-t8ksl" [ed3177ab-2b66-47a9-8f89-40db56cbc332] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 14:16:38.836890   94518 system_pods.go:89] "etcd-addons-493618" [b9b0c453-cead-48a5-916f-b6743b403b4d] Running
	I1018 14:16:38.836900   94518 system_pods.go:89] "kindnet-vhk9j" [f447a047-b1f9-4b54-8f86-2bceeda6e6f2] Running
	I1018 14:16:38.836907   94518 system_pods.go:89] "kube-apiserver-addons-493618" [75f188ee-c145-4c6c-8fe4-ec8c6c92a91c] Running
	I1018 14:16:38.836927   94518 system_pods.go:89] "kube-controller-manager-addons-493618" [18b03cc0-9988-4bdf-b7a2-1c4c0a50ea7c] Running
	I1018 14:16:38.836935   94518 system_pods.go:89] "kube-ingress-dns-minikube" [d97bcf56-e215-486d-bd23-84d300a41f66] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:16:38.836944   94518 system_pods.go:89] "kube-proxy-5x2v2" [43716c06-fdd2-4f68-87f9-d90cf2f16440] Running
	I1018 14:16:38.836951   94518 system_pods.go:89] "kube-scheduler-addons-493618" [dc3fc718-9168-4264-aa9e-4ce985ac1d72] Running
	I1018 14:16:38.836958   94518 system_pods.go:89] "metrics-server-85b7d694d7-hzzlq" [93c9f378-d616-4060-a537-9060c4ce996a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:16:38.836970   94518 system_pods.go:89] "nvidia-device-plugin-daemonset-w9ks6" [0837d0f2-cef9-4ff6-b233-2547cb3c5f55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:16:38.836985   94518 system_pods.go:89] "registry-6b586f9694-pdjc2" [d5f69ebc-b615-4334-8fe1-593c6fe7f496] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:16:38.836997   94518 system_pods.go:89] "registry-creds-764b6fb674-czp24" [a3c3218a-127e-4d0d-90f6-a2b735fc7c5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:16:38.837005   94518 system_pods.go:89] "registry-proxy-dddz6" [55ecd78e-f023-433a-80aa-4a98aed74734] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:16:38.837016   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8ftdc" [41ec4b8b-e0f1-4aed-a826-e4d50c52e35d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:38.837026   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fcm6w" [fcafa25f-dac8-4c6c-9dee-6c155ff5f214] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:38.837036   94518 system_pods.go:89] "storage-provisioner" [0ce9ef3e-e1d3-4979-b307-1d30a38cfc5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 14:16:38.837060   94518 retry.go:31] will retry after 303.187947ms: missing components: kube-dns
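Editor's note: the "missing components: kube-dns" retries mean the k8s-apps check requires a set of core components to be Running, and kube-dns (served by the coredns-* pod, still Pending above) is the holdout. A sketch of such a check over a static pod list; the component-to-prefix mapping and helper name are illustrative:

package main

import (
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// missingComponents returns every required component with no Running pod
// whose name starts with the component's prefix.
func missingComponents(pods []corev1.Pod, required map[string]string) []string {
	var missing []string
	for comp, prefix := range required {
		found := false
		for _, p := range pods {
			if strings.HasPrefix(p.Name, prefix) && p.Status.Phase == corev1.PodRunning {
				found = true
				break
			}
		}
		if !found {
			missing = append(missing, comp)
		}
	}
	return missing
}

func main() {
	pods := []corev1.Pod{{
		ObjectMeta: metav1.ObjectMeta{Name: "coredns-66bc5c9577-zsv4k"},
		Status:     corev1.PodStatus{Phase: corev1.PodPending},
	}}
	// Prints [kube-dns] until the coredns pod reports Running.
	fmt.Println(missingComponents(pods, map[string]string{"kube-dns": "coredns-"}))
}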
	I1018 14:16:39.002953   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:39.146165   94518 system_pods.go:86] 20 kube-system pods found
	I1018 14:16:39.146209   94518 system_pods.go:89] "amd-gpu-device-plugin-ps8fn" [212e7c35-e96b-474d-a839-a1234082db1e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:16:39.146220   94518 system_pods.go:89] "coredns-66bc5c9577-zsv4k" [f2e200e1-f869-49c9-9964-a7ce8b78fc36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:16:39.146229   94518 system_pods.go:89] "csi-hostpath-attacher-0" [3623b39a-4edb-4205-aeef-6f143a1226ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 14:16:39.146237   94518 system_pods.go:89] "csi-hostpath-resizer-0" [d1f8fabd-93ae-47ef-9455-9609361e348d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 14:16:39.146245   94518 system_pods.go:89] "csi-hostpathplugin-t8ksl" [ed3177ab-2b66-47a9-8f89-40db56cbc332] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 14:16:39.146251   94518 system_pods.go:89] "etcd-addons-493618" [b9b0c453-cead-48a5-916f-b6743b403b4d] Running
	I1018 14:16:39.146257   94518 system_pods.go:89] "kindnet-vhk9j" [f447a047-b1f9-4b54-8f86-2bceeda6e6f2] Running
	I1018 14:16:39.146264   94518 system_pods.go:89] "kube-apiserver-addons-493618" [75f188ee-c145-4c6c-8fe4-ec8c6c92a91c] Running
	I1018 14:16:39.146270   94518 system_pods.go:89] "kube-controller-manager-addons-493618" [18b03cc0-9988-4bdf-b7a2-1c4c0a50ea7c] Running
	I1018 14:16:39.146285   94518 system_pods.go:89] "kube-ingress-dns-minikube" [d97bcf56-e215-486d-bd23-84d300a41f66] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:16:39.146293   94518 system_pods.go:89] "kube-proxy-5x2v2" [43716c06-fdd2-4f68-87f9-d90cf2f16440] Running
	I1018 14:16:39.146299   94518 system_pods.go:89] "kube-scheduler-addons-493618" [dc3fc718-9168-4264-aa9e-4ce985ac1d72] Running
	I1018 14:16:39.146311   94518 system_pods.go:89] "metrics-server-85b7d694d7-hzzlq" [93c9f378-d616-4060-a537-9060c4ce996a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:16:39.146320   94518 system_pods.go:89] "nvidia-device-plugin-daemonset-w9ks6" [0837d0f2-cef9-4ff6-b233-2547cb3c5f55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:16:39.146329   94518 system_pods.go:89] "registry-6b586f9694-pdjc2" [d5f69ebc-b615-4334-8fe1-593c6fe7f496] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:16:39.146342   94518 system_pods.go:89] "registry-creds-764b6fb674-czp24" [a3c3218a-127e-4d0d-90f6-a2b735fc7c5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:16:39.146354   94518 system_pods.go:89] "registry-proxy-dddz6" [55ecd78e-f023-433a-80aa-4a98aed74734] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:16:39.146362   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8ftdc" [41ec4b8b-e0f1-4aed-a826-e4d50c52e35d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:39.146372   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fcm6w" [fcafa25f-dac8-4c6c-9dee-6c155ff5f214] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:39.146381   94518 system_pods.go:89] "storage-provisioner" [0ce9ef3e-e1d3-4979-b307-1d30a38cfc5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 14:16:39.146407   94518 retry.go:31] will retry after 360.79099ms: missing components: kube-dns
	I1018 14:16:39.263006   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:39.276186   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:39.301149   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:39.502995   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:39.512628   94518 system_pods.go:86] 20 kube-system pods found
	I1018 14:16:39.512677   94518 system_pods.go:89] "amd-gpu-device-plugin-ps8fn" [212e7c35-e96b-474d-a839-a1234082db1e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:16:39.512690   94518 system_pods.go:89] "coredns-66bc5c9577-zsv4k" [f2e200e1-f869-49c9-9964-a7ce8b78fc36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:16:39.512702   94518 system_pods.go:89] "csi-hostpath-attacher-0" [3623b39a-4edb-4205-aeef-6f143a1226ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 14:16:39.512711   94518 system_pods.go:89] "csi-hostpath-resizer-0" [d1f8fabd-93ae-47ef-9455-9609361e348d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 14:16:39.512719   94518 system_pods.go:89] "csi-hostpathplugin-t8ksl" [ed3177ab-2b66-47a9-8f89-40db56cbc332] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 14:16:39.512726   94518 system_pods.go:89] "etcd-addons-493618" [b9b0c453-cead-48a5-916f-b6743b403b4d] Running
	I1018 14:16:39.512736   94518 system_pods.go:89] "kindnet-vhk9j" [f447a047-b1f9-4b54-8f86-2bceeda6e6f2] Running
	I1018 14:16:39.512742   94518 system_pods.go:89] "kube-apiserver-addons-493618" [75f188ee-c145-4c6c-8fe4-ec8c6c92a91c] Running
	I1018 14:16:39.512751   94518 system_pods.go:89] "kube-controller-manager-addons-493618" [18b03cc0-9988-4bdf-b7a2-1c4c0a50ea7c] Running
	I1018 14:16:39.512761   94518 system_pods.go:89] "kube-ingress-dns-minikube" [d97bcf56-e215-486d-bd23-84d300a41f66] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:16:39.512770   94518 system_pods.go:89] "kube-proxy-5x2v2" [43716c06-fdd2-4f68-87f9-d90cf2f16440] Running
	I1018 14:16:39.512776   94518 system_pods.go:89] "kube-scheduler-addons-493618" [dc3fc718-9168-4264-aa9e-4ce985ac1d72] Running
	I1018 14:16:39.512785   94518 system_pods.go:89] "metrics-server-85b7d694d7-hzzlq" [93c9f378-d616-4060-a537-9060c4ce996a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:16:39.512798   94518 system_pods.go:89] "nvidia-device-plugin-daemonset-w9ks6" [0837d0f2-cef9-4ff6-b233-2547cb3c5f55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:16:39.512809   94518 system_pods.go:89] "registry-6b586f9694-pdjc2" [d5f69ebc-b615-4334-8fe1-593c6fe7f496] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:16:39.512817   94518 system_pods.go:89] "registry-creds-764b6fb674-czp24" [a3c3218a-127e-4d0d-90f6-a2b735fc7c5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:16:39.512828   94518 system_pods.go:89] "registry-proxy-dddz6" [55ecd78e-f023-433a-80aa-4a98aed74734] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:16:39.512838   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8ftdc" [41ec4b8b-e0f1-4aed-a826-e4d50c52e35d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:39.512850   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fcm6w" [fcafa25f-dac8-4c6c-9dee-6c155ff5f214] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:39.512858   94518 system_pods.go:89] "storage-provisioner" [0ce9ef3e-e1d3-4979-b307-1d30a38cfc5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 14:16:39.512881   94518 retry.go:31] will retry after 432.482193ms: missing components: kube-dns
	I1018 14:16:39.762902   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:39.776402   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:39.801542   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:39.950641   94518 system_pods.go:86] 20 kube-system pods found
	I1018 14:16:39.950687   94518 system_pods.go:89] "amd-gpu-device-plugin-ps8fn" [212e7c35-e96b-474d-a839-a1234082db1e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:16:39.950695   94518 system_pods.go:89] "coredns-66bc5c9577-zsv4k" [f2e200e1-f869-49c9-9964-a7ce8b78fc36] Running
	I1018 14:16:39.950708   94518 system_pods.go:89] "csi-hostpath-attacher-0" [3623b39a-4edb-4205-aeef-6f143a1226ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 14:16:39.950716   94518 system_pods.go:89] "csi-hostpath-resizer-0" [d1f8fabd-93ae-47ef-9455-9609361e348d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 14:16:39.950726   94518 system_pods.go:89] "csi-hostpathplugin-t8ksl" [ed3177ab-2b66-47a9-8f89-40db56cbc332] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 14:16:39.950733   94518 system_pods.go:89] "etcd-addons-493618" [b9b0c453-cead-48a5-916f-b6743b403b4d] Running
	I1018 14:16:39.950743   94518 system_pods.go:89] "kindnet-vhk9j" [f447a047-b1f9-4b54-8f86-2bceeda6e6f2] Running
	I1018 14:16:39.950755   94518 system_pods.go:89] "kube-apiserver-addons-493618" [75f188ee-c145-4c6c-8fe4-ec8c6c92a91c] Running
	I1018 14:16:39.950767   94518 system_pods.go:89] "kube-controller-manager-addons-493618" [18b03cc0-9988-4bdf-b7a2-1c4c0a50ea7c] Running
	I1018 14:16:39.950776   94518 system_pods.go:89] "kube-ingress-dns-minikube" [d97bcf56-e215-486d-bd23-84d300a41f66] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:16:39.950795   94518 system_pods.go:89] "kube-proxy-5x2v2" [43716c06-fdd2-4f68-87f9-d90cf2f16440] Running
	I1018 14:16:39.950805   94518 system_pods.go:89] "kube-scheduler-addons-493618" [dc3fc718-9168-4264-aa9e-4ce985ac1d72] Running
	I1018 14:16:39.950813   94518 system_pods.go:89] "metrics-server-85b7d694d7-hzzlq" [93c9f378-d616-4060-a537-9060c4ce996a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:16:39.950825   94518 system_pods.go:89] "nvidia-device-plugin-daemonset-w9ks6" [0837d0f2-cef9-4ff6-b233-2547cb3c5f55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:16:39.950837   94518 system_pods.go:89] "registry-6b586f9694-pdjc2" [d5f69ebc-b615-4334-8fe1-593c6fe7f496] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:16:39.950844   94518 system_pods.go:89] "registry-creds-764b6fb674-czp24" [a3c3218a-127e-4d0d-90f6-a2b735fc7c5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:16:39.950855   94518 system_pods.go:89] "registry-proxy-dddz6" [55ecd78e-f023-433a-80aa-4a98aed74734] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:16:39.950864   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8ftdc" [41ec4b8b-e0f1-4aed-a826-e4d50c52e35d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:39.950878   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fcm6w" [fcafa25f-dac8-4c6c-9dee-6c155ff5f214] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:39.950883   94518 system_pods.go:89] "storage-provisioner" [0ce9ef3e-e1d3-4979-b307-1d30a38cfc5e] Running
	I1018 14:16:39.950903   94518 system_pods.go:126] duration metric: took 1.133578445s to wait for k8s-apps to be running ...
	I1018 14:16:39.950927   94518 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 14:16:39.950986   94518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 14:16:39.969681   94518 system_svc.go:56] duration metric: took 18.745966ms WaitForService to wait for kubelet
	I1018 14:16:39.969710   94518 kubeadm.go:586] duration metric: took 42.30276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 14:16:39.969733   94518 node_conditions.go:102] verifying NodePressure condition ...
	I1018 14:16:39.972886   94518 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 14:16:39.972931   94518 node_conditions.go:123] node cpu capacity is 8
	I1018 14:16:39.972952   94518 node_conditions.go:105] duration metric: took 3.212854ms to run NodePressure ...
	I1018 14:16:39.972976   94518 start.go:241] waiting for startup goroutines ...
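Note: the NodePressure figures logged just above are read from the node object's status. The same values can be checked by hand with standard kubectl jsonpath; the node name addons-493618 is inferred from the control-plane pod names in this log, so treat it as an assumption:

    kubectl get node addons-493618 \
      -o jsonpath='{.status.capacity.cpu}{" cpus, "}{.status.capacity.ephemeral-storage}{"\n"}'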
	I1018 14:16:40.002066   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:40.262894   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:40.276088   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:40.300675   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:40.501979   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:40.762663   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:40.775357   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:40.801162   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:41.001217   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:41.263258   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:41.276712   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:41.302030   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:41.501566   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:41.763346   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:41.776428   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:41.864042   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:42.002413   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:42.261523   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:42.275424   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:42.301128   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:42.501233   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:42.762642   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:42.775674   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:42.801398   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:43.002340   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:43.262615   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:43.275813   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:43.301739   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:43.501955   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:43.762643   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:43.775323   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:43.801232   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:44.000775   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:44.262060   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:44.276189   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:44.300886   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:44.502251   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:44.764473   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:44.778601   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:44.801574   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:45.002597   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:45.262417   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:45.276012   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:45.300998   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:45.502358   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:45.762909   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:45.776217   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:45.801374   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:46.001819   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:46.262735   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:46.276581   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:46.301959   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:46.502478   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:46.762137   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:46.776205   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:46.800977   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:47.002011   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:47.263363   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:47.275692   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:47.301849   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:47.502303   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:47.762097   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:47.776163   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:47.801288   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:48.001490   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:48.261703   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:48.276059   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:48.301046   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:48.501699   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:48.762014   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:48.776050   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:48.801136   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:49.003122   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:49.262958   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:49.276638   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:49.301711   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:49.504298   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:49.762891   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:49.776580   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:49.801807   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:50.002042   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:50.262618   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:50.275672   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:50.301314   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:50.501039   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:50.762127   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:50.775584   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:50.801981   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:51.002167   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:51.263088   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:51.276354   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:51.301136   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:51.480427   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:51.502052   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:51.762057   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:51.775898   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:51.801122   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:52.000897   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:52.028927   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:52.028967   94518 retry.go:31] will retry after 23.730297472s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
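Note: kubectl's client-side validation rejects any YAML document that does not declare both apiVersion and kind, which is exactly what the stderr above reports for ig-crd.yaml; the rest of the bundle applies cleanly ("unchanged"/"configured"), so the problem is confined to one document in that file. A minimal way to locate the offending document on the node (file path taken from the log; the file's actual contents are an assumption):

    # Show document separators and the two required header keys; any
    # "---"-delimited document missing both lines triggers this error.
    sudo grep -nE '^(---|apiVersion:|kind:)' /etc/kubernetes/addons/ig-crd.yaml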
	I1018 14:16:52.262296   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:52.276403   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:52.301168   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:52.502234   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:52.762724   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:52.776030   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:52.800809   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:53.002194   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:53.263147   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:53.276322   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:53.301440   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:53.501640   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:53.762159   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:53.780927   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:53.801573   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:54.001940   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:54.262129   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:54.275901   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:54.300784   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:54.502117   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:54.762236   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:54.863504   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:54.863546   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:55.001421   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:55.263239   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:55.276598   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:55.301592   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:55.502021   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:55.762642   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:55.775215   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:55.801168   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:56.001789   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:56.262562   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:56.276012   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:56.301105   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:56.501757   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:56.762498   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:56.842533   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:56.842884   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:57.002277   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:57.263015   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:57.275626   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:57.301290   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:57.501174   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:57.764069   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:57.777805   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:57.802024   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:58.001971   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:58.262456   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:58.276292   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:58.301340   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:58.501658   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:58.763184   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:58.776640   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:58.801759   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:59.002068   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:59.275369   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:59.276620   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:59.301023   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:59.501710   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:59.763756   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:59.865186   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:59.865222   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:00.002706   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:00.265539   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:00.279599   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:00.301880   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:00.502335   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:00.763538   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:00.775930   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:00.801897   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:01.002519   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:01.262026   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:01.276130   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:01.362572   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:01.501369   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:01.763644   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:01.779108   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:01.801020   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:02.001535   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:02.262634   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:02.276612   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:02.303963   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:02.501305   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:02.762496   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:02.776181   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:02.801068   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:03.002743   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:03.262111   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:03.276934   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:03.300828   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:03.504229   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:03.763691   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:03.776119   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:03.800631   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:04.003713   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:04.262687   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:04.276482   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:04.301743   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:04.502068   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:04.763078   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:04.776689   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:04.802101   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:05.001886   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:05.262410   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:05.276337   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:05.307319   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:05.501644   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:05.762053   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:05.776369   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:05.801797   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:06.002447   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:06.262193   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:06.275849   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:06.302174   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:06.502353   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:06.762956   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:06.776611   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:06.801155   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:07.001449   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:07.262841   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:07.276120   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:07.301192   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:07.502865   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:07.762883   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:07.776486   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:07.801984   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:08.002204   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:08.262684   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:08.275841   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:08.300609   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:08.501552   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:08.761868   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:08.777284   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:08.801575   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:09.002088   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:09.262321   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:09.275116   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:09.300794   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:09.502103   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:09.763105   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:09.775593   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:09.802027   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:10.002530   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:10.262721   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:10.363567   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:10.363604   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:10.501248   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:10.762594   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:10.775272   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:10.828298   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:11.002160   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:11.262832   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:11.275989   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:11.300855   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:11.504707   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:11.762245   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:11.776332   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:11.801408   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:12.002170   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:12.262266   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:12.276626   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:12.301680   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:12.502059   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:12.762293   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:12.776456   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:12.801320   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:13.001785   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:13.262871   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:13.276298   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:13.302882   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:13.503814   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:13.762416   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:13.844416   94518 kapi.go:107] duration metric: took 1m15.046903502s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 14:17:13.845081   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:14.002739   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:14.262420   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:14.276625   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:14.501876   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:14.763082   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:14.776373   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:15.002215   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:15.262541   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:15.275951   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:15.503027   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:15.759384   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:17:15.762301   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:15.776692   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:16.002515   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:16.262732   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:16.275795   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 14:17:16.451253   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:17:16.451302   94518 retry.go:31] will retry after 39.128992898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:17:16.501604   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:16.763396   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:16.775487   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:17.004186   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:17.262984   94518 kapi.go:107] duration metric: took 1m18.00445624s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 14:17:17.276176   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:17.501480   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:17.776270   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:18.002634   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:18.276586   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:18.501658   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:18.776313   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:19.001775   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:19.276193   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:19.502728   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:19.776495   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:20.000907   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:20.276522   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:20.501176   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:20.775718   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:21.002256   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:21.276110   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:21.502718   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:21.776475   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:22.001349   94518 kapi.go:107] duration metric: took 1m16.00324245s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 14:17:22.003029   94518 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-493618 cluster.
	I1018 14:17:22.004220   94518 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 14:17:22.005269   94518 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
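Note: the hint above refers to minikube's gcp-auth webhook, which leaves pods alone when they carry the gcp-auth-skip-secret label. A minimal sketch of an opted-out pod follows; the pod name is hypothetical, and the "true" value follows minikube's documentation (the message only names the key):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-no-gcp-creds           # hypothetical name
      labels:
        gcp-auth-skip-secret: "true"    # label key from the message above
    spec:
      containers:
      - name: app
        image: docker.io/nginx:latest
    EOF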
	I1018 14:17:22.276180   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:22.776487   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:23.276181   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:23.777075   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:24.308479   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:24.777192   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:25.275835   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:25.777029   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:26.276794   94518 kapi.go:107] duration metric: took 1m26.504622464s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
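Note: each kapi.go:96 line above is one poll of the pods matching a label selector, repeated until they report Ready. Roughly the same check can be reproduced by hand (selector and namespace taken from this log; the flags are standard kubectl wait syntax):

    kubectl -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver \
      --for=condition=Ready --timeout=120s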
	I1018 14:17:55.584930   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 14:17:56.123857   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 14:17:56.124019   94518 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1018 14:17:56.126955   94518 out.go:179] * Enabled addons: storage-provisioner, cloud-spanner, registry-creds, ingress-dns, metrics-server, nvidia-device-plugin, amd-gpu-device-plugin, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1018 14:17:56.127992   94518 addons.go:514] duration metric: took 1m58.460970758s for enable addons: enabled=[storage-provisioner cloud-spanner registry-creds ingress-dns metrics-server nvidia-device-plugin amd-gpu-device-plugin yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
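Note: inspektor-gadget is the one addon missing from this list; its repeated ig-crd.yaml apply failures above are why. The final addon state can be confirmed against the same profile, using the binary path and profile name this log uses throughout:

    out/minikube-linux-amd64 -p addons-493618 addons list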
	I1018 14:17:56.128052   94518 start.go:246] waiting for cluster config update ...
	I1018 14:17:56.128083   94518 start.go:255] writing updated cluster config ...
	I1018 14:17:56.128406   94518 ssh_runner.go:195] Run: rm -f paused
	I1018 14:17:56.132411   94518 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 14:17:56.136263   94518 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zsv4k" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.140509   94518 pod_ready.go:94] pod "coredns-66bc5c9577-zsv4k" is "Ready"
	I1018 14:17:56.140532   94518 pod_ready.go:86] duration metric: took 4.248281ms for pod "coredns-66bc5c9577-zsv4k" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.142491   94518 pod_ready.go:83] waiting for pod "etcd-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.146289   94518 pod_ready.go:94] pod "etcd-addons-493618" is "Ready"
	I1018 14:17:56.146311   94518 pod_ready.go:86] duration metric: took 3.8003ms for pod "etcd-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.148001   94518 pod_ready.go:83] waiting for pod "kube-apiserver-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.151493   94518 pod_ready.go:94] pod "kube-apiserver-addons-493618" is "Ready"
	I1018 14:17:56.151516   94518 pod_ready.go:86] duration metric: took 3.485308ms for pod "kube-apiserver-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.153295   94518 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.536543   94518 pod_ready.go:94] pod "kube-controller-manager-addons-493618" is "Ready"
	I1018 14:17:56.536571   94518 pod_ready.go:86] duration metric: took 383.254622ms for pod "kube-controller-manager-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.736793   94518 pod_ready.go:83] waiting for pod "kube-proxy-5x2v2" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:57.136427   94518 pod_ready.go:94] pod "kube-proxy-5x2v2" is "Ready"
	I1018 14:17:57.136456   94518 pod_ready.go:86] duration metric: took 399.638474ms for pod "kube-proxy-5x2v2" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:57.336271   94518 pod_ready.go:83] waiting for pod "kube-scheduler-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:57.736585   94518 pod_ready.go:94] pod "kube-scheduler-addons-493618" is "Ready"
	I1018 14:17:57.736613   94518 pod_ready.go:86] duration metric: took 400.31858ms for pod "kube-scheduler-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:57.736623   94518 pod_ready.go:40] duration metric: took 1.604180528s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 14:17:57.782211   94518 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 14:17:57.783876   94518 out.go:179] * Done! kubectl is now configured to use "addons-493618" cluster and "default" namespace by default
	
	
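The pod_ready entries in the log above walk through a simple polling loop: for each control-plane pod in kube-system, minikube repeatedly checks the pod until its Ready condition is True (or the pod is gone), under an overall 4m0s cap, and records a per-pod duration metric. Below is a minimal client-go sketch of that kind of readiness poll, assuming a kubeconfig at the default location; the 2s interval and the simplification to "Ready only" are assumptions, and this is not minikube's actual pod_ready.go.

// podready_sketch.go — hedged sketch of polling one kube-system pod
// (name taken from the log above) until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s for up to 4m, mirroring the log's "extra waiting up to 4m0s".
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-addons-493618", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}

The per-pod "duration metric" lines above are simply how long each such loop ran before the Ready condition was observed.
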
	==> CRI-O <==
	Oct 18 14:18:57 addons-493618 crio[781]: time="2025-10-18T14:18:57.089773281Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=cd76a5af-fbbf-4735-bf06-fe1e4c2ca7b1 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:18:57 addons-493618 crio[781]: time="2025-10-18T14:18:57.090995338Z" level=info msg="Pulling image: docker.io/nginx:latest" id=c0369e3d-0ef1-4446-84fa-5d6c77e81c18 name=/runtime.v1.ImageService/PullImage
	Oct 18 14:18:57 addons-493618 crio[781]: time="2025-10-18T14:18:57.092691017Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 18 14:18:57 addons-493618 crio[781]: time="2025-10-18T14:18:57.122634244Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=c67edf23-6e37-4cd0-95b9-4c095902fea3 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:18:57 addons-493618 crio[781]: time="2025-10-18T14:18:57.126596234Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-czp24/registry-creds" id=ccf839b5-d128-4efd-a967-307058e56c7d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 14:18:57 addons-493618 crio[781]: time="2025-10-18T14:18:57.127367141Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 14:18:57 addons-493618 crio[781]: time="2025-10-18T14:18:57.132703053Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 14:18:57 addons-493618 crio[781]: time="2025-10-18T14:18:57.133157049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 14:18:57 addons-493618 crio[781]: time="2025-10-18T14:18:57.172143722Z" level=info msg="Created container a2d90a4bb564c43991d5a0c84c81880730aa5a76930e356ff3a20d99954e1b06: kube-system/registry-creds-764b6fb674-czp24/registry-creds" id=ccf839b5-d128-4efd-a967-307058e56c7d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 14:18:57 addons-493618 crio[781]: time="2025-10-18T14:18:57.172779036Z" level=info msg="Starting container: a2d90a4bb564c43991d5a0c84c81880730aa5a76930e356ff3a20d99954e1b06" id=6d45488b-7088-4a06-bd2f-102051d01dfa name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 14:18:57 addons-493618 crio[781]: time="2025-10-18T14:18:57.174543571Z" level=info msg="Started container" PID=8885 containerID=a2d90a4bb564c43991d5a0c84c81880730aa5a76930e356ff3a20d99954e1b06 description=kube-system/registry-creds-764b6fb674-czp24/registry-creds id=6d45488b-7088-4a06-bd2f-102051d01dfa name=/runtime.v1.RuntimeService/StartContainer sandboxID=c7208d1abb3e09e7c080c8d548195632fe914a5c5324d6cae0a97d380e4d4cec
	Oct 18 14:19:28 addons-493618 crio[781]: time="2025-10-18T14:19:28.431251756Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 18 14:20:13 addons-493618 crio[781]: time="2025-10-18T14:20:13.531052148Z" level=info msg="Pulling image: docker.io/nginx:latest" id=4058fea4-df44-480f-8ed6-99a3f9fbfc3f name=/runtime.v1.ImageService/PullImage
	Oct 18 14:20:13 addons-493618 crio[781]: time="2025-10-18T14:20:13.534897385Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 18 14:20:36 addons-493618 crio[781]: time="2025-10-18T14:20:36.345285259Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-9gb5k/POD" id=d6924b80-8b7e-407f-8ae7-6307549cdc8d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 14:20:36 addons-493618 crio[781]: time="2025-10-18T14:20:36.345435181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 14:20:36 addons-493618 crio[781]: time="2025-10-18T14:20:36.351808175Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-9gb5k Namespace:default ID:800c5d1d791d2af5da330c1a64a93a8bf5eb286154a81d7108d1511b3ee1d191 UID:d9bf04c9-933f-480e-a7d0-77e9398aab3c NetNS:/var/run/netns/1c1cfe23-b334-463b-bf22-f87275e24dca Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000f2ede0}] Aliases:map[]}"
	Oct 18 14:20:36 addons-493618 crio[781]: time="2025-10-18T14:20:36.351840523Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-9gb5k to CNI network \"kindnet\" (type=ptp)"
	Oct 18 14:20:36 addons-493618 crio[781]: time="2025-10-18T14:20:36.362328168Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-9gb5k Namespace:default ID:800c5d1d791d2af5da330c1a64a93a8bf5eb286154a81d7108d1511b3ee1d191 UID:d9bf04c9-933f-480e-a7d0-77e9398aab3c NetNS:/var/run/netns/1c1cfe23-b334-463b-bf22-f87275e24dca Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000f2ede0}] Aliases:map[]}"
	Oct 18 14:20:36 addons-493618 crio[781]: time="2025-10-18T14:20:36.36244824Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-9gb5k for CNI network kindnet (type=ptp)"
	Oct 18 14:20:36 addons-493618 crio[781]: time="2025-10-18T14:20:36.364163837Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 14:20:36 addons-493618 crio[781]: time="2025-10-18T14:20:36.365768335Z" level=info msg="Ran pod sandbox 800c5d1d791d2af5da330c1a64a93a8bf5eb286154a81d7108d1511b3ee1d191 with infra container: default/hello-world-app-5d498dc89-9gb5k/POD" id=d6924b80-8b7e-407f-8ae7-6307549cdc8d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 14:20:36 addons-493618 crio[781]: time="2025-10-18T14:20:36.367159625Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=0f2c6f95-d90c-401c-8670-6021e090edd7 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:20:36 addons-493618 crio[781]: time="2025-10-18T14:20:36.36729658Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=0f2c6f95-d90c-401c-8670-6021e090edd7 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:20:36 addons-493618 crio[781]: time="2025-10-18T14:20:36.367340925Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=0f2c6f95-d90c-401c-8670-6021e090edd7 name=/runtime.v1.ImageService/ImageStatus
	
	
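The CRI-O entries above show the standard CRI call sequence for a workload: an ImageStatus check, a PullImage when the image is absent, then CreateContainer and StartContainer. Below is a minimal sketch of the same ImageStatus RPC issued directly against the CRI-O socket, assuming k8s.io/cri-api and google.golang.org/grpc; the socket path is an assumption, the image reference is taken from the log, and this is not CRI-O's or the kubelet's actual code.

// cri_imagestatus_sketch.go — hedged sketch of the
// /runtime.v1.ImageService/ImageStatus RPC that the log lines above record.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Assumed CRI-O socket path; adjust for your runtime.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"},
	})
	if err != nil {
		panic(err)
	}
	if resp.Image == nil {
		fmt.Println("image not found") // corresponds to the "Image ... not found" log line
	} else {
		fmt.Printf("image present: %s\n", resp.Image.Id)
	}
}

A nil Image in the response is exactly the "not found" case that triggers the PullImage seen in the log.
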
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	a2d90a4bb564c       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   c7208d1abb3e0       registry-creds-764b6fb674-czp24             kube-system
	38eab5508e267       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              2 minutes ago        Running             nginx                                    0                   d9445a77ecd5a       nginx                                       default
	f0ed3f5d6ffa8       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   097945ff6ffef       busybox                                     default
	fcb7161ee1d1b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          3 minutes ago        Running             csi-snapshotter                          0                   d3d06bcc5d099       csi-hostpathplugin-t8ksl                    kube-system
	8f357a51c6b5d       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago        Running             csi-provisioner                          0                   d3d06bcc5d099       csi-hostpathplugin-t8ksl                    kube-system
	530e145d6c2e0       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago        Running             liveness-probe                           0                   d3d06bcc5d099       csi-hostpathplugin-t8ksl                    kube-system
	84cd4c11831db       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago        Running             hostpath                                 0                   d3d06bcc5d099       csi-hostpathplugin-t8ksl                    kube-system
	edfb43ced2e1e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago        Running             gcp-auth                                 0                   0c4aa9fe754c5       gcp-auth-78565c9fb4-mwgsp                   gcp-auth
	fcf3c24788988       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago        Running             gadget                                   0                   0c73b5d5a20a9       gadget-vm8lx                                gadget
	10ae25ecd1d90       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago        Running             node-driver-registrar                    0                   d3d06bcc5d099       csi-hostpathplugin-t8ksl                    kube-system
	45501fab46f05       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             3 minutes ago        Running             controller                               0                   3e90b0db82f21       ingress-nginx-controller-675c5ddd98-sndwh   ingress-nginx
	50a19f5b596d4       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   5ce9bbd315430       registry-proxy-dddz6                        kube-system
	859d5d72eef12       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   ce015c134568b       amd-gpu-device-plugin-ps8fn                 kube-system
	78aea4ac76ed2       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   d601227de066c       nvidia-device-plugin-daemonset-w9ks6        kube-system
	775733aea8bf0       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   d3d06bcc5d099       csi-hostpathplugin-t8ksl                    kube-system
	32ea63c74de31       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   06ef25b517353       yakd-dashboard-5ff678cb9-cqgkj              yakd-dashboard
	6673efa077656       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   dec41ec76cd03       csi-hostpath-resizer-0                      kube-system
	89679d50a3910       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   0048d743f42d1       csi-hostpath-attacher-0                     kube-system
	c52d44cde4f71       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   b534c52d0c84c       snapshot-controller-7d9fbc56b8-fcm6w        kube-system
	6883ad86fcecd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago        Exited              patch                                    0                   a08859b82414b       ingress-nginx-admission-patch-vxb5f         ingress-nginx
	a9e1fbf487f51       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago        Running             cloud-spanner-emulator                   0                   69532574c7971       cloud-spanner-emulator-86bd5cbb97-2nxxs     default
	8e896cc7ee32d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago        Exited              create                                   0                   f011cb8ba518a       ingress-nginx-admission-create-tnv6j        ingress-nginx
	92ceaca691f51       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   2ee75d4e4001f       snapshot-controller-7d9fbc56b8-8ftdc        kube-system
	da0ddb2d0550b       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   c8aaf317eece5       kube-ingress-dns-minikube                   kube-system
	79474cdc2efcd       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   89516e7730f54       local-path-provisioner-648f6765c9-xgggg     local-path-storage
	a51f3eea29502       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   97f317fc1b5dc       registry-6b586f9694-pdjc2                   kube-system
	ca1869e801d6e       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   8f3ce70811032       metrics-server-85b7d694d7-hzzlq             kube-system
	7fc1c430e912b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   4107d196d2062       coredns-66bc5c9577-zsv4k                    kube-system
	d41651660ae84       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   ffc42416a6b3e       storage-provisioner                         kube-system
	778f4f35207fc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago        Running             kindnet-cni                              0                   5b6cacbfc954b       kindnet-vhk9j                               kube-system
	fc19fe3563e01       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago        Running             kube-proxy                               0                   ff4d1c0bbd1d6       kube-proxy-5x2v2                            kube-system
	f616a2d4df678       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago        Running             kube-apiserver                           0                   9bbc44f90a4b5       kube-apiserver-addons-493618                kube-system
	411a5716e9150       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago        Running             etcd                                     0                   56968af9a8607       etcd-addons-493618                          kube-system
	857014c2e77ee       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago        Running             kube-scheduler                           0                   3e0b656b74b60       kube-scheduler-addons-493618                kube-system
	aa8c1cbd9ac9c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago        Running             kube-controller-manager                  0                   a4c04910854cf       kube-controller-manager-addons-493618       kube-system
	
	
	==> coredns [7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca] <==
	[INFO] 10.244.0.22:57612 - 51632 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004894861s
	[INFO] 10.244.0.22:33801 - 2038 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004753797s
	[INFO] 10.244.0.22:51344 - 53286 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006885147s
	[INFO] 10.244.0.22:52656 - 1987 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005330183s
	[INFO] 10.244.0.22:38256 - 15835 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006395765s
	[INFO] 10.244.0.22:55111 - 46405 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000941313s
	[INFO] 10.244.0.22:46598 - 43189 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001357914s
	[INFO] 10.244.0.25:43143 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000242704s
	[INFO] 10.244.0.25:52167 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000135696s
	[INFO] 10.244.0.30:49100 - 7420 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000210696s
	[INFO] 10.244.0.30:41835 - 48939 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000294728s
	[INFO] 10.244.0.30:37353 - 34092 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000139238s
	[INFO] 10.244.0.30:42046 - 32094 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000169259s
	[INFO] 10.244.0.30:59297 - 43574 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000118194s
	[INFO] 10.244.0.30:50652 - 61592 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.00009496s
	[INFO] 10.244.0.30:60179 - 658 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003225928s
	[INFO] 10.244.0.30:33698 - 1453 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003300098s
	[INFO] 10.244.0.30:39843 - 62611 "A IN accounts.google.com.us-west1-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.004203955s
	[INFO] 10.244.0.30:50588 - 64448 "AAAA IN accounts.google.com.us-west1-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.004209654s
	[INFO] 10.244.0.30:41672 - 10869 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.003896743s
	[INFO] 10.244.0.30:54897 - 22710 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004966178s
	[INFO] 10.244.0.30:59095 - 64855 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004240864s
	[INFO] 10.244.0.30:47639 - 16831 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004256795s
	[INFO] 10.244.0.30:38330 - 23159 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.00155779s
	[INFO] 10.244.0.30:51314 - 22059 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001569376s
	
	
	==> describe nodes <==
	Name:               addons-493618
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-493618
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=addons-493618
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T14_15_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-493618
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-493618"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 14:15:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-493618
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 14:20:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 14:19:56 +0000   Sat, 18 Oct 2025 14:15:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 14:19:56 +0000   Sat, 18 Oct 2025 14:15:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 14:19:56 +0000   Sat, 18 Oct 2025 14:15:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 14:19:56 +0000   Sat, 18 Oct 2025 14:16:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-493618
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                c99ec94e-dad8-466b-986d-f557d98b8e1c
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (30 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  default                     cloud-spanner-emulator-86bd5cbb97-2nxxs      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  default                     hello-world-app-5d498dc89-9gb5k              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  default                     task-pv-pod-restore                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  gadget                      gadget-vm8lx                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  gcp-auth                    gcp-auth-78565c9fb4-mwgsp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-sndwh    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m38s
	  kube-system                 amd-gpu-device-plugin-ps8fn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 coredns-66bc5c9577-zsv4k                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m40s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 csi-hostpathplugin-t8ksl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 etcd-addons-493618                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m46s
	  kube-system                 kindnet-vhk9j                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m40s
	  kube-system                 kube-apiserver-addons-493618                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-controller-manager-addons-493618        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-proxy-5x2v2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-scheduler-addons-493618                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 metrics-server-85b7d694d7-hzzlq              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m39s
	  kube-system                 nvidia-device-plugin-daemonset-w9ks6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 registry-6b586f9694-pdjc2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 registry-creds-764b6fb674-czp24              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 registry-proxy-dddz6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 snapshot-controller-7d9fbc56b8-8ftdc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 snapshot-controller-7d9fbc56b8-fcm6w         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  local-path-storage          local-path-provisioner-648f6765c9-xgggg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-cqgkj               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m39s  kube-proxy       
	  Normal  Starting                 4m46s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m46s  kubelet          Node addons-493618 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s  kubelet          Node addons-493618 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s  kubelet          Node addons-493618 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m41s  node-controller  Node addons-493618 event: Registered Node addons-493618 in Controller
	  Normal  NodeReady                3m59s  kubelet          Node addons-493618 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.096767] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026410] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.055938] kauditd_printk_skb: 47 callbacks suppressed
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	
	
	==> etcd [411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5] <==
	{"level":"warn","ts":"2025-10-18T14:15:48.524192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.530308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.536786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.546053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.559657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.566802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.575632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.584037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.591784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.605020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.612481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.619606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.634187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.637964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.644321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.650704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.695116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:16:00.196257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:16:00.202493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:16:26.281250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:16:26.287738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:16:26.308478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:16:26.315202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39426","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T14:16:59.273487Z","caller":"traceutil/trace.go:172","msg":"trace[603722411] transaction","detail":"{read_only:false; response_revision:1051; number_of_response:1; }","duration":"100.664551ms","start":"2025-10-18T14:16:59.172784Z","end":"2025-10-18T14:16:59.273449Z","steps":["trace[603722411] 'process raft request'  (duration: 100.381339ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:17:24.306442Z","caller":"traceutil/trace.go:172","msg":"trace[1562610933] transaction","detail":"{read_only:false; response_revision:1203; number_of_response:1; }","duration":"100.640382ms","start":"2025-10-18T14:17:24.205781Z","end":"2025-10-18T14:17:24.306422Z","steps":["trace[1562610933] 'process raft request'  (duration: 64.205106ms)","trace[1562610933] 'compare'  (duration: 36.281867ms)"],"step_count":2}
	
	
	==> gcp-auth [edfb43ced2e1e4c4fbb178805c38e20bf5073a4864e99ecf580aa951e010b54f] <==
	2025/10/18 14:17:21 GCP Auth Webhook started!
	2025/10/18 14:17:58 Ready to marshal response ...
	2025/10/18 14:17:58 Ready to write response ...
	2025/10/18 14:17:58 Ready to marshal response ...
	2025/10/18 14:17:58 Ready to write response ...
	2025/10/18 14:17:58 Ready to marshal response ...
	2025/10/18 14:17:58 Ready to write response ...
	2025/10/18 14:18:12 Ready to marshal response ...
	2025/10/18 14:18:12 Ready to write response ...
	2025/10/18 14:18:16 Ready to marshal response ...
	2025/10/18 14:18:16 Ready to write response ...
	2025/10/18 14:18:20 Ready to marshal response ...
	2025/10/18 14:18:20 Ready to write response ...
	2025/10/18 14:18:20 Ready to marshal response ...
	2025/10/18 14:18:20 Ready to write response ...
	2025/10/18 14:18:23 Ready to marshal response ...
	2025/10/18 14:18:23 Ready to write response ...
	2025/10/18 14:18:31 Ready to marshal response ...
	2025/10/18 14:18:31 Ready to write response ...
	2025/10/18 14:18:55 Ready to marshal response ...
	2025/10/18 14:18:55 Ready to write response ...
	2025/10/18 14:20:36 Ready to marshal response ...
	2025/10/18 14:20:36 Ready to write response ...
	
	
	==> kernel <==
	 14:20:37 up  2:03,  0 user,  load average: 0.27, 1.66, 2.47
	Linux addons-493618 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750] <==
	I1018 14:18:28.061240       1 main.go:301] handling current node
	I1018 14:18:38.061185       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:18:38.061243       1 main.go:301] handling current node
	I1018 14:18:48.061062       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:18:48.061093       1 main.go:301] handling current node
	I1018 14:18:58.093089       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:18:58.093139       1 main.go:301] handling current node
	I1018 14:19:08.060515       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:19:08.060553       1 main.go:301] handling current node
	I1018 14:19:18.060816       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:19:18.060862       1 main.go:301] handling current node
	I1018 14:19:28.060901       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:19:28.060956       1 main.go:301] handling current node
	I1018 14:19:38.060804       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:19:38.060835       1 main.go:301] handling current node
	I1018 14:19:48.061245       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:19:48.061275       1 main.go:301] handling current node
	I1018 14:19:58.092671       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:19:58.092704       1 main.go:301] handling current node
	I1018 14:20:08.061316       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:20:08.061349       1 main.go:301] handling current node
	I1018 14:20:18.061050       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:20:18.061082       1 main.go:301] handling current node
	I1018 14:20:28.060713       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:20:28.060746       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4] <==
	W1018 14:16:26.315041       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 14:16:38.576682       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.149.8:443: connect: connection refused
	E1018 14:16:38.576731       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.149.8:443: connect: connection refused" logger="UnhandledError"
	W1018 14:16:38.576868       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.149.8:443: connect: connection refused
	E1018 14:16:38.576902       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.149.8:443: connect: connection refused" logger="UnhandledError"
	W1018 14:16:38.600334       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.149.8:443: connect: connection refused
	E1018 14:16:38.600374       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.149.8:443: connect: connection refused" logger="UnhandledError"
	W1018 14:16:38.600902       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.149.8:443: connect: connection refused
	E1018 14:16:38.600965       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.149.8:443: connect: connection refused" logger="UnhandledError"
	E1018 14:16:41.703457       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.161.37:443: connect: connection refused" logger="UnhandledError"
	W1018 14:16:41.703665       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 14:16:41.703731       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1018 14:16:41.704079       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.161.37:443: connect: connection refused" logger="UnhandledError"
	E1018 14:16:41.709516       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.161.37:443: connect: connection refused" logger="UnhandledError"
	E1018 14:16:41.731124       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.161.37:443: connect: connection refused" logger="UnhandledError"
	I1018 14:16:41.803282       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 14:18:06.446462       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36446: use of closed network connection
	E1018 14:18:06.603755       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36468: use of closed network connection
	I1018 14:18:12.402027       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 14:18:12.584964       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.131.156"}
	I1018 14:18:34.708350       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1018 14:20:36.111613       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.185.2"}
	
	
	==> kube-controller-manager [aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8] <==
	I1018 14:15:56.264599       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 14:15:56.264698       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 14:15:56.265922       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 14:15:56.268232       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 14:15:56.268288       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 14:15:56.268335       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 14:15:56.268348       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 14:15:56.268355       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 14:15:56.268387       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 14:15:56.269609       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:15:56.269629       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 14:15:56.269638       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 14:15:56.269971       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 14:15:56.275422       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-493618" podCIDRs=["10.244.0.0/24"]
	I1018 14:15:56.277385       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 14:15:56.289378       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 14:15:58.850088       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1018 14:16:26.274934       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 14:16:26.275118       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 14:16:26.275191       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 14:16:26.299136       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 14:16:26.302741       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 14:16:26.376108       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 14:16:26.403598       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:16:41.219427       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa] <==
	I1018 14:15:57.532244       1 server_linux.go:53] "Using iptables proxy"
	I1018 14:15:57.592753       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 14:15:57.697045       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:15:57.697101       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 14:15:57.697216       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:15:57.841695       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 14:15:57.841901       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:15:57.911876       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:15:57.922658       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:15:57.939484       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:15:57.952373       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:15:57.952400       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:15:57.952456       1 config.go:200] "Starting service config controller"
	I1018 14:15:57.952467       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:15:57.952500       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:15:57.952508       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:15:57.954225       1 config.go:309] "Starting node config controller"
	I1018 14:15:57.954269       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:15:57.954278       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:15:58.053620       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 14:15:58.053669       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:15:58.053697       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1] <==
	E1018 14:15:49.134247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:15:49.134258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:15:49.134330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 14:15:49.134307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:15:49.134338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 14:15:49.134328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 14:15:49.134351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:15:49.134453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:15:49.134460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 14:15:49.946543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 14:15:49.998890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:15:50.032174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:15:50.063609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:15:50.072057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 14:15:50.134634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 14:15:50.154988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:15:50.166165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 14:15:50.179329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 14:15:50.235814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 14:15:50.269111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:15:50.270159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:15:50.295510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 14:15:50.353863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 14:15:50.392021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1018 14:15:52.930460       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
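
The "Failed to watch" errors above are the usual startup race: the scheduler's informers begin listing resources before the API server has finished bootstrapping RBAC, and they stop once the caches sync (14:15:52 above). A minimal client-go sketch, not part of this suite and assuming kubeconfig access to the same cluster, that asks the API server whether system:kube-scheduler may list pods:

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// SubjectAccessReview: "may system:kube-scheduler list pods cluster-wide?"
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User: "system:kube-scheduler",
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Resource: "pods",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
		context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}

Once RBAC bootstrapping completes this reports allowed=true, which matches the log: the reflector retries succeed and the final "Caches are synced" line appears.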
	
	
	==> kubelet <==
	Oct 18 14:18:36 addons-493618 kubelet[1280]: I1018 14:18:36.207148    1280 scope.go:117] "RemoveContainer" containerID="8690339bee982db70d5989f0033cef5ced1f1093f2c7323db05febf2f27ba29d"
	Oct 18 14:18:36 addons-493618 kubelet[1280]: E1018 14:18:36.207595    1280 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8690339bee982db70d5989f0033cef5ced1f1093f2c7323db05febf2f27ba29d\": container with ID starting with 8690339bee982db70d5989f0033cef5ced1f1093f2c7323db05febf2f27ba29d not found: ID does not exist" containerID="8690339bee982db70d5989f0033cef5ced1f1093f2c7323db05febf2f27ba29d"
	Oct 18 14:18:36 addons-493618 kubelet[1280]: I1018 14:18:36.207663    1280 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8690339bee982db70d5989f0033cef5ced1f1093f2c7323db05febf2f27ba29d"} err="failed to get container status \"8690339bee982db70d5989f0033cef5ced1f1093f2c7323db05febf2f27ba29d\": rpc error: code = NotFound desc = could not find container \"8690339bee982db70d5989f0033cef5ced1f1093f2c7323db05febf2f27ba29d\": container with ID starting with 8690339bee982db70d5989f0033cef5ced1f1093f2c7323db05febf2f27ba29d not found: ID does not exist"
	Oct 18 14:18:36 addons-493618 kubelet[1280]: I1018 14:18:36.254479    1280 reconciler_common.go:299] "Volume detached for volume \"pvc-0953538b-a101-4904-818a-9c18918f1219\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^4e3532eb-ac2d-11f0-b04d-5af7a071350c\") on node \"addons-493618\" DevicePath \"\""
	Oct 18 14:18:37 addons-493618 kubelet[1280]: I1018 14:18:37.531972    1280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="945e23ea-7ddc-4163-8149-734254930996" path="/var/lib/kubelet/pods/945e23ea-7ddc-4163-8149-734254930996/volumes"
	Oct 18 14:18:39 addons-493618 kubelet[1280]: I1018 14:18:39.528339    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ps8fn" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:18:40 addons-493618 kubelet[1280]: I1018 14:18:40.528294    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-dddz6" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:18:41 addons-493618 kubelet[1280]: E1018 14:18:41.593927    1280 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-czp24" podUID="a3c3218a-127e-4d0d-90f6-a2b735fc7c5c"
	Oct 18 14:18:51 addons-493618 kubelet[1280]: I1018 14:18:51.546720    1280 scope.go:117] "RemoveContainer" containerID="5e8ca9f4560b6be6ada017859f3fe1102005e8ff04c84720164dbe72d7d2f6a3"
	Oct 18 14:18:51 addons-493618 kubelet[1280]: I1018 14:18:51.554934    1280 scope.go:117] "RemoveContainer" containerID="353d25bde33e6aafe8bb0fa464cc1c56c5fe98c62a9a86ca184151677988f463"
	Oct 18 14:18:55 addons-493618 kubelet[1280]: I1018 14:18:55.788333    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-51509358-ae73-4c48-a8f0-ce7639f0b163\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^612b1541-ac2d-11f0-b04d-5af7a071350c\") pod \"task-pv-pod-restore\" (UID: \"55a04f24-70ab-4ed9-9957-f15ef2c7f034\") " pod="default/task-pv-pod-restore"
	Oct 18 14:18:55 addons-493618 kubelet[1280]: I1018 14:18:55.788407    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwrqd\" (UniqueName: \"kubernetes.io/projected/55a04f24-70ab-4ed9-9957-f15ef2c7f034-kube-api-access-lwrqd\") pod \"task-pv-pod-restore\" (UID: \"55a04f24-70ab-4ed9-9957-f15ef2c7f034\") " pod="default/task-pv-pod-restore"
	Oct 18 14:18:55 addons-493618 kubelet[1280]: I1018 14:18:55.788463    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/55a04f24-70ab-4ed9-9957-f15ef2c7f034-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"55a04f24-70ab-4ed9-9957-f15ef2c7f034\") " pod="default/task-pv-pod-restore"
	Oct 18 14:18:55 addons-493618 kubelet[1280]: I1018 14:18:55.894966    1280 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-51509358-ae73-4c48-a8f0-ce7639f0b163\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^612b1541-ac2d-11f0-b04d-5af7a071350c\") pod \"task-pv-pod-restore\" (UID: \"55a04f24-70ab-4ed9-9957-f15ef2c7f034\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/80d42703353eb2e74134fbada772a9951806386fd7b821153757a3e754cb76ff/globalmount\"" pod="default/task-pv-pod-restore"
	Oct 18 14:18:57 addons-493618 kubelet[1280]: I1018 14:18:57.293400    1280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-czp24" podStartSLOduration=177.756066529 podStartE2EDuration="2m59.293380168s" podCreationTimestamp="2025-10-18 14:15:58 +0000 UTC" firstStartedPulling="2025-10-18 14:18:55.553428913 +0000 UTC m=+184.111681904" lastFinishedPulling="2025-10-18 14:18:57.090742566 +0000 UTC m=+185.648995543" observedRunningTime="2025-10-18 14:18:57.293146276 +0000 UTC m=+185.851399298" watchObservedRunningTime="2025-10-18 14:18:57.293380168 +0000 UTC m=+185.851633184"
	Oct 18 14:19:43 addons-493618 kubelet[1280]: I1018 14:19:43.528049    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-w9ks6" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:19:59 addons-493618 kubelet[1280]: E1018 14:19:59.764104    1280 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 18 14:19:59 addons-493618 kubelet[1280]: E1018 14:19:59.764177    1280 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 18 14:19:59 addons-493618 kubelet[1280]: E1018 14:19:59.764304    1280 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod-restore_default(55a04f24-70ab-4ed9-9957-f15ef2c7f034): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 14:19:59 addons-493618 kubelet[1280]: E1018 14:19:59.764366    1280 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod-restore" podUID="55a04f24-70ab-4ed9-9957-f15ef2c7f034"
	Oct 18 14:20:00 addons-493618 kubelet[1280]: I1018 14:20:00.527865    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-dddz6" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:20:00 addons-493618 kubelet[1280]: E1018 14:20:00.532348    1280 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod-restore" podUID="55a04f24-70ab-4ed9-9957-f15ef2c7f034"
	Oct 18 14:20:05 addons-493618 kubelet[1280]: I1018 14:20:05.528047    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ps8fn" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:20:36 addons-493618 kubelet[1280]: I1018 14:20:36.102302    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d9bf04c9-933f-480e-a7d0-77e9398aab3c-gcp-creds\") pod \"hello-world-app-5d498dc89-9gb5k\" (UID: \"d9bf04c9-933f-480e-a7d0-77e9398aab3c\") " pod="default/hello-world-app-5d498dc89-9gb5k"
	Oct 18 14:20:36 addons-493618 kubelet[1280]: I1018 14:20:36.102380    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7mqh\" (UniqueName: \"kubernetes.io/projected/d9bf04c9-933f-480e-a7d0-77e9398aab3c-kube-api-access-n7mqh\") pod \"hello-world-app-5d498dc89-9gb5k\" (UID: \"d9bf04c9-933f-480e-a7d0-77e9398aab3c\") " pod="default/hello-world-app-5d498dc89-9gb5k"
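
Two pull problems are visible above: the optional gcp-auth pull secret is absent (benign in this run), and anonymous pulls of docker.io/nginx hit Docker Hub's unauthenticated rate limit (toomanyrequests). A minimal sketch, assuming client-go and a real Docker Hub account (the secret name hub-creds and the credential placeholders are illustrative, not from this suite), of creating a dockerconfigjson pull secret that a pod can reference via spec.imagePullSecrets so pulls are authenticated:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A kubernetes.io/dockerconfigjson secret; reference it from the pod's
	// spec.imagePullSecrets so kubelet authenticates against Docker Hub
	// instead of pulling anonymously.
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "hub-creds", Namespace: "default"},
		Type:       corev1.SecretTypeDockerConfigJson,
		StringData: map[string]string{
			corev1.DockerConfigJsonKey: `{"auths":{"https://index.docker.io/v1/":{"username":"USER","password":"ACCESS_TOKEN"}}}`,
		},
	}
	if _, err := cs.CoreV1().Secrets("default").Create(
		context.Background(), secret, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}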
	
	
	==> storage-provisioner [d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e] <==
	W1018 14:20:12.130829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:14.134067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:14.138071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:16.141938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:16.147180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:18.150970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:18.155003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:20.158665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:20.163056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:22.166365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:22.171018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:24.174490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:24.179261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:26.182973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:26.187107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:28.190056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:28.194049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:30.197616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:30.201349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:32.204803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:32.209877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:34.213740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:34.217688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:36.221794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:20:36.226966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
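
The warnings repeat every two seconds because the provisioner's leader election still reads v1 Endpoints, which is deprecated in favor of discovery.k8s.io/v1 EndpointSlice (and coordination.k8s.io Leases for leader election). A minimal client-go sketch of the replacement read path, assuming kubeconfig access and not part of this suite:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Read EndpointSlices instead of the deprecated v1 Endpoints; the slices
	// belonging to a service carry the label kubernetes.io/service-name.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
		context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
}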
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-493618 -n addons-493618
helpers_test.go:269: (dbg) Run:  kubectl --context addons-493618 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-9gb5k task-pv-pod-restore ingress-nginx-admission-create-tnv6j ingress-nginx-admission-patch-vxb5f
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-493618 describe pod hello-world-app-5d498dc89-9gb5k task-pv-pod-restore ingress-nginx-admission-create-tnv6j ingress-nginx-admission-patch-vxb5f
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-493618 describe pod hello-world-app-5d498dc89-9gb5k task-pv-pod-restore ingress-nginx-admission-create-tnv6j ingress-nginx-admission-patch-vxb5f: exit status 1 (74.270291ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-9gb5k
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-493618/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:20:36 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n7mqh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n7mqh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-9gb5k to addons-493618
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-493618/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:18:55 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lwrqd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-lwrqd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  103s                default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-493618
	  Warning  Failed     39s                 kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     39s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    38s                 kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     38s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    25s (x2 over 103s)  kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-tnv6j" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vxb5f" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-493618 describe pod hello-world-app-5d498dc89-9gb5k task-pv-pod-restore ingress-nginx-admission-create-tnv6j ingress-nginx-admission-patch-vxb5f: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-493618 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (232.854038ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 14:20:38.614408  109510 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:20:38.614644  109510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:20:38.614653  109510 out.go:374] Setting ErrFile to fd 2...
	I1018 14:20:38.614657  109510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:20:38.614840  109510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:20:38.615119  109510 mustload.go:65] Loading cluster: addons-493618
	I1018 14:20:38.615442  109510 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:20:38.615457  109510 addons.go:606] checking whether the cluster is paused
	I1018 14:20:38.615533  109510 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:20:38.615545  109510 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:20:38.615908  109510 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:20:38.633292  109510 ssh_runner.go:195] Run: systemctl --version
	I1018 14:20:38.633359  109510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:20:38.650770  109510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:20:38.746316  109510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:20:38.746391  109510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:20:38.776320  109510 cri.go:89] found id: "a2d90a4bb564c43991d5a0c84c81880730aa5a76930e356ff3a20d99954e1b06"
	I1018 14:20:38.776346  109510 cri.go:89] found id: "fcb7161ee1d1b988b7ab1002f35293f0b7091c9bac0d7bde7e8471426df926cf"
	I1018 14:20:38.776352  109510 cri.go:89] found id: "8f357a51c6b5dca76ccd31166d38783da025c5cbb4407847372de403612f8902"
	I1018 14:20:38.776357  109510 cri.go:89] found id: "530e145d6c2e072e1f708ba6909c358faf2489d1fb8aa08aa89f54faf841dbbf"
	I1018 14:20:38.776361  109510 cri.go:89] found id: "84cd4c11831db761e4325adee4879bf6a046512882f96bb1ca62c4882e6c8939"
	I1018 14:20:38.776366  109510 cri.go:89] found id: "10ae25ecd1d9000b05b1943468a2e37d7d7ad234a9cbda7cef9a4926bc9a192c"
	I1018 14:20:38.776370  109510 cri.go:89] found id: "50a19f5b596d4f200db4d0ab0cd9f04fbb94a5ef09f512aeefc81c7d4920b424"
	I1018 14:20:38.776374  109510 cri.go:89] found id: "859d5d72eef12d0f592a58b42cf0be1853b423f2c91a731d854b74ff79470e8f"
	I1018 14:20:38.776378  109510 cri.go:89] found id: "78aea4ac76ed251743d7541eaa9c63acd9c8c2af1858f2862ef1bb654b9423b3"
	I1018 14:20:38.776396  109510 cri.go:89] found id: "775733aea8bf0d456fffa861d7bbf1e97ecc24f79c468dcc0c8f760bfe0b6df0"
	I1018 14:20:38.776400  109510 cri.go:89] found id: "6673efa0776562914d4e7c35e4a4a121d60c7796edfc736b01120598fdf6dfda"
	I1018 14:20:38.776404  109510 cri.go:89] found id: "89679d50a3910712c81dd83e3ab5d310452f13a23c0dce3d8afeb4abec58b99f"
	I1018 14:20:38.776409  109510 cri.go:89] found id: "c52d44cde4f7100b3b7100db0eda92a6716001140bcb111fc1b8bad7bcffcd87"
	I1018 14:20:38.776416  109510 cri.go:89] found id: "92ceaca691f5147122ab362b3936a7201bede1ae2ac6288d04e5d0641f150e2f"
	I1018 14:20:38.776419  109510 cri.go:89] found id: "da0ddb2d0550b3553349e63e5796b3d59ac8c5a7d574c962118808b4750f4934"
	I1018 14:20:38.776434  109510 cri.go:89] found id: "a51f3eea295020493bcd77872d74756d33249e7b25591c8967346f770e49a885"
	I1018 14:20:38.776441  109510 cri.go:89] found id: "ca1869e801d6ed194d599a52234484412f6d9de897b6eadae30ca18325c92762"
	I1018 14:20:38.776446  109510 cri.go:89] found id: "7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca"
	I1018 14:20:38.776448  109510 cri.go:89] found id: "d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e"
	I1018 14:20:38.776451  109510 cri.go:89] found id: "778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750"
	I1018 14:20:38.776453  109510 cri.go:89] found id: "fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa"
	I1018 14:20:38.776456  109510 cri.go:89] found id: "f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4"
	I1018 14:20:38.776459  109510 cri.go:89] found id: "411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5"
	I1018 14:20:38.776461  109510 cri.go:89] found id: "857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1"
	I1018 14:20:38.776463  109510 cri.go:89] found id: "aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8"
	I1018 14:20:38.776465  109510 cri.go:89] found id: ""
	I1018 14:20:38.776505  109510 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 14:20:38.791007  109510 out.go:203] 
	W1018 14:20:38.792352  109510 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:20:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:20:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 14:20:38.792374  109510 out.go:285] * 
	* 
	W1018 14:20:38.797604  109510 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 14:20:38.798936  109510 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-493618 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
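
Every disable failure in this report has the same shape: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers and then running `sudo runc list -f json`, and on this crio image /run/runc does not exist, so the check (and with it the disable) exits with MK_ADDON_DISABLE_PAUSED. A hedged sketch of a more tolerant check, where listPaused and runcContainer are illustrative names rather than minikube's actual API, that treats a missing runc state directory as meaning nothing is paused:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer mirrors the fields we need from `runc list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused returns the IDs of paused containers, treating a missing
// /run/runc state directory as an empty list rather than a hard failure.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// Matches the error text seen in the trace above.
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil // no runc state dir: nothing can be paused
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, out)
	}
	// runc prints a JSON array, or the literal "null" when empty.
	var cs []runcContainer
	if len(out) > 0 && strings.TrimSpace(string(out)) != "null" {
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, fmt.Errorf("parse runc list output: %v", err)
		}
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		panic(err)
	}
	fmt.Println("paused containers:", ids)
}

Whether minikube should tolerate the missing directory is a separate question; the sketch only shows that the error above is distinguishable from a genuine pause-check failure.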
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-493618 addons disable ingress --alsologtostderr -v=1: exit status 11 (234.99662ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 14:20:38.848873  109572 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:20:38.848993  109572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:20:38.849002  109572 out.go:374] Setting ErrFile to fd 2...
	I1018 14:20:38.849006  109572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:20:38.849183  109572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:20:38.849459  109572 mustload.go:65] Loading cluster: addons-493618
	I1018 14:20:38.849795  109572 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:20:38.849812  109572 addons.go:606] checking whether the cluster is paused
	I1018 14:20:38.849890  109572 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:20:38.849902  109572 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:20:38.850363  109572 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:20:38.869610  109572 ssh_runner.go:195] Run: systemctl --version
	I1018 14:20:38.869682  109572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:20:38.887254  109572 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:20:38.983550  109572 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:20:38.983644  109572 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:20:39.013115  109572 cri.go:89] found id: "a2d90a4bb564c43991d5a0c84c81880730aa5a76930e356ff3a20d99954e1b06"
	I1018 14:20:39.013138  109572 cri.go:89] found id: "fcb7161ee1d1b988b7ab1002f35293f0b7091c9bac0d7bde7e8471426df926cf"
	I1018 14:20:39.013142  109572 cri.go:89] found id: "8f357a51c6b5dca76ccd31166d38783da025c5cbb4407847372de403612f8902"
	I1018 14:20:39.013145  109572 cri.go:89] found id: "530e145d6c2e072e1f708ba6909c358faf2489d1fb8aa08aa89f54faf841dbbf"
	I1018 14:20:39.013153  109572 cri.go:89] found id: "84cd4c11831db761e4325adee4879bf6a046512882f96bb1ca62c4882e6c8939"
	I1018 14:20:39.013156  109572 cri.go:89] found id: "10ae25ecd1d9000b05b1943468a2e37d7d7ad234a9cbda7cef9a4926bc9a192c"
	I1018 14:20:39.013158  109572 cri.go:89] found id: "50a19f5b596d4f200db4d0ab0cd9f04fbb94a5ef09f512aeefc81c7d4920b424"
	I1018 14:20:39.013161  109572 cri.go:89] found id: "859d5d72eef12d0f592a58b42cf0be1853b423f2c91a731d854b74ff79470e8f"
	I1018 14:20:39.013164  109572 cri.go:89] found id: "78aea4ac76ed251743d7541eaa9c63acd9c8c2af1858f2862ef1bb654b9423b3"
	I1018 14:20:39.013169  109572 cri.go:89] found id: "775733aea8bf0d456fffa861d7bbf1e97ecc24f79c468dcc0c8f760bfe0b6df0"
	I1018 14:20:39.013171  109572 cri.go:89] found id: "6673efa0776562914d4e7c35e4a4a121d60c7796edfc736b01120598fdf6dfda"
	I1018 14:20:39.013174  109572 cri.go:89] found id: "89679d50a3910712c81dd83e3ab5d310452f13a23c0dce3d8afeb4abec58b99f"
	I1018 14:20:39.013176  109572 cri.go:89] found id: "c52d44cde4f7100b3b7100db0eda92a6716001140bcb111fc1b8bad7bcffcd87"
	I1018 14:20:39.013178  109572 cri.go:89] found id: "92ceaca691f5147122ab362b3936a7201bede1ae2ac6288d04e5d0641f150e2f"
	I1018 14:20:39.013181  109572 cri.go:89] found id: "da0ddb2d0550b3553349e63e5796b3d59ac8c5a7d574c962118808b4750f4934"
	I1018 14:20:39.013185  109572 cri.go:89] found id: "a51f3eea295020493bcd77872d74756d33249e7b25591c8967346f770e49a885"
	I1018 14:20:39.013187  109572 cri.go:89] found id: "ca1869e801d6ed194d599a52234484412f6d9de897b6eadae30ca18325c92762"
	I1018 14:20:39.013190  109572 cri.go:89] found id: "7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca"
	I1018 14:20:39.013193  109572 cri.go:89] found id: "d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e"
	I1018 14:20:39.013195  109572 cri.go:89] found id: "778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750"
	I1018 14:20:39.013200  109572 cri.go:89] found id: "fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa"
	I1018 14:20:39.013202  109572 cri.go:89] found id: "f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4"
	I1018 14:20:39.013204  109572 cri.go:89] found id: "411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5"
	I1018 14:20:39.013206  109572 cri.go:89] found id: "857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1"
	I1018 14:20:39.013208  109572 cri.go:89] found id: "aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8"
	I1018 14:20:39.013211  109572 cri.go:89] found id: ""
	I1018 14:20:39.013246  109572 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 14:20:39.026956  109572 out.go:203] 
	W1018 14:20:39.028261  109572 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:20:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:20:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 14:20:39.028282  109572 out.go:285] * 
	* 
	W1018 14:20:39.033168  109572 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 14:20:39.034441  109572 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-493618 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.88s)

TestAddons/parallel/InspektorGadget (5.24s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-vm8lx" [27fb999f-070c-412d-a609-17b2eb175d4d] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003446629s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-493618 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (234.793763ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 14:18:19.709075  105651 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:18:19.709175  105651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:19.709181  105651 out.go:374] Setting ErrFile to fd 2...
	I1018 14:18:19.709186  105651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:19.709362  105651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:18:19.709663  105651 mustload.go:65] Loading cluster: addons-493618
	I1018 14:18:19.710025  105651 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:19.710042  105651 addons.go:606] checking whether the cluster is paused
	I1018 14:18:19.710121  105651 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:19.710133  105651 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:18:19.710496  105651 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:18:19.727905  105651 ssh_runner.go:195] Run: systemctl --version
	I1018 14:18:19.728015  105651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:18:19.745974  105651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:18:19.841554  105651 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:18:19.841645  105651 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:18:19.869785  105651 cri.go:89] found id: "fcb7161ee1d1b988b7ab1002f35293f0b7091c9bac0d7bde7e8471426df926cf"
	I1018 14:18:19.869811  105651 cri.go:89] found id: "8f357a51c6b5dca76ccd31166d38783da025c5cbb4407847372de403612f8902"
	I1018 14:18:19.869818  105651 cri.go:89] found id: "530e145d6c2e072e1f708ba6909c358faf2489d1fb8aa08aa89f54faf841dbbf"
	I1018 14:18:19.869822  105651 cri.go:89] found id: "84cd4c11831db761e4325adee4879bf6a046512882f96bb1ca62c4882e6c8939"
	I1018 14:18:19.869827  105651 cri.go:89] found id: "10ae25ecd1d9000b05b1943468a2e37d7d7ad234a9cbda7cef9a4926bc9a192c"
	I1018 14:18:19.869832  105651 cri.go:89] found id: "50a19f5b596d4f200db4d0ab0cd9f04fbb94a5ef09f512aeefc81c7d4920b424"
	I1018 14:18:19.869837  105651 cri.go:89] found id: "859d5d72eef12d0f592a58b42cf0be1853b423f2c91a731d854b74ff79470e8f"
	I1018 14:18:19.869840  105651 cri.go:89] found id: "78aea4ac76ed251743d7541eaa9c63acd9c8c2af1858f2862ef1bb654b9423b3"
	I1018 14:18:19.869853  105651 cri.go:89] found id: "775733aea8bf0d456fffa861d7bbf1e97ecc24f79c468dcc0c8f760bfe0b6df0"
	I1018 14:18:19.869861  105651 cri.go:89] found id: "6673efa0776562914d4e7c35e4a4a121d60c7796edfc736b01120598fdf6dfda"
	I1018 14:18:19.869864  105651 cri.go:89] found id: "89679d50a3910712c81dd83e3ab5d310452f13a23c0dce3d8afeb4abec58b99f"
	I1018 14:18:19.869873  105651 cri.go:89] found id: "c52d44cde4f7100b3b7100db0eda92a6716001140bcb111fc1b8bad7bcffcd87"
	I1018 14:18:19.869878  105651 cri.go:89] found id: "92ceaca691f5147122ab362b3936a7201bede1ae2ac6288d04e5d0641f150e2f"
	I1018 14:18:19.869881  105651 cri.go:89] found id: "da0ddb2d0550b3553349e63e5796b3d59ac8c5a7d574c962118808b4750f4934"
	I1018 14:18:19.869884  105651 cri.go:89] found id: "a51f3eea295020493bcd77872d74756d33249e7b25591c8967346f770e49a885"
	I1018 14:18:19.869891  105651 cri.go:89] found id: "ca1869e801d6ed194d599a52234484412f6d9de897b6eadae30ca18325c92762"
	I1018 14:18:19.869893  105651 cri.go:89] found id: "7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca"
	I1018 14:18:19.869896  105651 cri.go:89] found id: "d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e"
	I1018 14:18:19.869898  105651 cri.go:89] found id: "778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750"
	I1018 14:18:19.869901  105651 cri.go:89] found id: "fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa"
	I1018 14:18:19.869906  105651 cri.go:89] found id: "f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4"
	I1018 14:18:19.869908  105651 cri.go:89] found id: "411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5"
	I1018 14:18:19.869922  105651 cri.go:89] found id: "857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1"
	I1018 14:18:19.869927  105651 cri.go:89] found id: "aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8"
	I1018 14:18:19.869931  105651 cri.go:89] found id: ""
	I1018 14:18:19.869975  105651 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 14:18:19.884353  105651 out.go:203] 
	W1018 14:18:19.885654  105651 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 14:18:19.885676  105651 out.go:285] * 
	* 
	W1018 14:18:19.890578  105651 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 14:18:19.891953  105651 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-493618 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.24s)

TestAddons/parallel/MetricsServer (5.31s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.23588ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-hzzlq" [93c9f378-d616-4060-a537-9060c4ce996a] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002645457s
addons_test.go:463: (dbg) Run:  kubectl --context addons-493618 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-493618 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (238.470859ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 14:18:11.964563  104452 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:18:11.964819  104452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:11.964833  104452 out.go:374] Setting ErrFile to fd 2...
	I1018 14:18:11.964838  104452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:11.965093  104452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:18:11.965370  104452 mustload.go:65] Loading cluster: addons-493618
	I1018 14:18:11.965731  104452 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:11.965748  104452 addons.go:606] checking whether the cluster is paused
	I1018 14:18:11.965832  104452 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:11.965846  104452 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:18:11.966246  104452 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:18:11.984833  104452 ssh_runner.go:195] Run: systemctl --version
	I1018 14:18:11.984886  104452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:18:12.002399  104452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:18:12.099705  104452 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:18:12.099816  104452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:18:12.130145  104452 cri.go:89] found id: "fcb7161ee1d1b988b7ab1002f35293f0b7091c9bac0d7bde7e8471426df926cf"
	I1018 14:18:12.130169  104452 cri.go:89] found id: "8f357a51c6b5dca76ccd31166d38783da025c5cbb4407847372de403612f8902"
	I1018 14:18:12.130173  104452 cri.go:89] found id: "530e145d6c2e072e1f708ba6909c358faf2489d1fb8aa08aa89f54faf841dbbf"
	I1018 14:18:12.130176  104452 cri.go:89] found id: "84cd4c11831db761e4325adee4879bf6a046512882f96bb1ca62c4882e6c8939"
	I1018 14:18:12.130178  104452 cri.go:89] found id: "10ae25ecd1d9000b05b1943468a2e37d7d7ad234a9cbda7cef9a4926bc9a192c"
	I1018 14:18:12.130181  104452 cri.go:89] found id: "50a19f5b596d4f200db4d0ab0cd9f04fbb94a5ef09f512aeefc81c7d4920b424"
	I1018 14:18:12.130184  104452 cri.go:89] found id: "859d5d72eef12d0f592a58b42cf0be1853b423f2c91a731d854b74ff79470e8f"
	I1018 14:18:12.130186  104452 cri.go:89] found id: "78aea4ac76ed251743d7541eaa9c63acd9c8c2af1858f2862ef1bb654b9423b3"
	I1018 14:18:12.130189  104452 cri.go:89] found id: "775733aea8bf0d456fffa861d7bbf1e97ecc24f79c468dcc0c8f760bfe0b6df0"
	I1018 14:18:12.130195  104452 cri.go:89] found id: "6673efa0776562914d4e7c35e4a4a121d60c7796edfc736b01120598fdf6dfda"
	I1018 14:18:12.130197  104452 cri.go:89] found id: "89679d50a3910712c81dd83e3ab5d310452f13a23c0dce3d8afeb4abec58b99f"
	I1018 14:18:12.130200  104452 cri.go:89] found id: "c52d44cde4f7100b3b7100db0eda92a6716001140bcb111fc1b8bad7bcffcd87"
	I1018 14:18:12.130216  104452 cri.go:89] found id: "92ceaca691f5147122ab362b3936a7201bede1ae2ac6288d04e5d0641f150e2f"
	I1018 14:18:12.130219  104452 cri.go:89] found id: "da0ddb2d0550b3553349e63e5796b3d59ac8c5a7d574c962118808b4750f4934"
	I1018 14:18:12.130221  104452 cri.go:89] found id: "a51f3eea295020493bcd77872d74756d33249e7b25591c8967346f770e49a885"
	I1018 14:18:12.130225  104452 cri.go:89] found id: "ca1869e801d6ed194d599a52234484412f6d9de897b6eadae30ca18325c92762"
	I1018 14:18:12.130228  104452 cri.go:89] found id: "7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca"
	I1018 14:18:12.130231  104452 cri.go:89] found id: "d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e"
	I1018 14:18:12.130234  104452 cri.go:89] found id: "778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750"
	I1018 14:18:12.130236  104452 cri.go:89] found id: "fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa"
	I1018 14:18:12.130238  104452 cri.go:89] found id: "f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4"
	I1018 14:18:12.130241  104452 cri.go:89] found id: "411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5"
	I1018 14:18:12.130243  104452 cri.go:89] found id: "857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1"
	I1018 14:18:12.130245  104452 cri.go:89] found id: "aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8"
	I1018 14:18:12.130247  104452 cri.go:89] found id: ""
	I1018 14:18:12.130285  104452 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 14:18:12.144662  104452 out.go:203] 
	W1018 14:18:12.145901  104452 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 14:18:12.145931  104452 out.go:285] * 
	* 
	W1018 14:18:12.150813  104452 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 14:18:12.152271  104452 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-493618 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.31s)
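
Both `addons disable` failures above exit with MK_ADDON_DISABLE_PAUSED for the same reason: after listing the kube-system containers through crictl, minikube's paused-state check shells out to `sudo runc list -f json`, and /run/runc does not exist on this crio node. A minimal reproduction sketch against the same profile (assuming it is still running; the commands mirror the ssh_runner calls in the stderr log above):

# crictl sees the containers, so the runtime itself is healthy:
minikube -p addons-493618 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
# runc reproduces the exact failure reported by the addon disable:
minikube -p addons-493618 ssh -- sudo runc list -f json
# => level=error msg="open /run/runc: no such file or directory"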

x
+
TestAddons/parallel/CSI (400.42s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1018 14:18:18.336084   93187 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1018 14:18:18.339673   93187 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1018 14:18:18.339705   93187 kapi.go:107] duration metric: took 3.635789ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.64792ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-493618 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-493618 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [945e23ea-7ddc-4163-8149-734254930996] Pending
helpers_test.go:352: "task-pv-pod" [945e23ea-7ddc-4163-8149-734254930996] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [945e23ea-7ddc-4163-8149-734254930996] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.00412961s
addons_test.go:572: (dbg) Run:  kubectl --context addons-493618 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-493618 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-493618 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-493618 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-493618 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-493618 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-493618 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [55a04f24-70ab-4ed9-9957-f15ef2c7f034] Pending
helpers_test.go:352: "task-pv-pod-restore" [55a04f24-70ab-4ed9-9957-f15ef2c7f034] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod-restore" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:609: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod-restore" failed to start within 6m0s: context deadline exceeded ****
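
For context on the step that timed out: pvc-restore.yaml feeds the snapshot taken above back into a fresh claim via a dataSource reference. A hypothetical sketch of such a claim, reusing the claim and snapshot names from this log; the storage class and size are assumptions, not the contents of the test's actual testdata:

kubectl --context addons-493618 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc     # assumed; use whatever class the csi-hostpath-driver addon installs
  dataSource:
    name: new-snapshot-demo             # the VolumeSnapshot created earlier in this test
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                      # assumed size
EOF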
addons_test.go:609: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-493618 -n addons-493618
addons_test.go:609: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-10-18 14:24:55.994014631 +0000 UTC m=+596.501017580
addons_test.go:609: (dbg) Run:  kubectl --context addons-493618 describe po task-pv-pod-restore -n default
addons_test.go:609: (dbg) kubectl --context addons-493618 describe po task-pv-pod-restore -n default:
Name:             task-pv-pod-restore
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-493618/192.168.49.2
Start Time:       Sat, 18 Oct 2025 14:18:55 +0000
Labels:           app=task-pv-pod-restore
Annotations:      <none>
Status:           Pending
IP:               10.244.0.31
IPs:
  IP:  10.244.0.31
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP (http-server)
    Host Port:      0/TCP (http-server)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lwrqd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc-restore
    ReadOnly:   false
  kube-api-access-lwrqd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  6m1s                 default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-493618
  Warning  Failed     3m9s                 kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     88s (x2 over 4m57s)  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     88s (x3 over 4m57s)  kubelet            Error: ErrImagePull
  Normal   BackOff    62s (x4 over 4m56s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     62s (x4 over 4m56s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    47s (x4 over 6m1s)   kubelet            Pulling image "docker.io/nginx"
addons_test.go:609: (dbg) Run:  kubectl --context addons-493618 logs task-pv-pod-restore -n default
addons_test.go:609: (dbg) Non-zero exit: kubectl --context addons-493618 logs task-pv-pod-restore -n default: exit status 1 (68.913462ms)

** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod-restore" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:609: kubectl --context addons-493618 logs task-pv-pod-restore -n default: exit status 1
addons_test.go:610: failed waiting for pod task-pv-pod-restore: app=task-pv-pod-restore within 6m0s: context deadline exceeded
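
The restore pod never left ImagePullBackOff because every anonymous docker.io/nginx pull hit Docker Hub's rate limit (see the Events above); the CSI plumbing itself had already scheduled the pod and attached the restored volume. A hedged workaround sketch for reruns, assuming the host docker daemon holds authenticated Hub credentials: pull on the host and side-load the image so the kubelet never contacts the registry:

docker pull docker.io/nginx                            # authenticated host pull avoids the anonymous limit
minikube -p addons-493618 image load docker.io/nginx   # copy the image from the host daemon into the node's crio store
kubectl --context addons-493618 delete pod task-pv-pod-restore
kubectl --context addons-493618 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml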
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-493618
helpers_test.go:243: (dbg) docker inspect addons-493618:

-- stdout --
	[
	    {
	        "Id": "7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748",
	        "Created": "2025-10-18T14:15:35.142040375Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 95181,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T14:15:35.183965001Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748/hosts",
	        "LogPath": "/var/lib/docker/containers/7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748/7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748-json.log",
	        "Name": "/addons-493618",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-493618:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-493618",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748",
	                "LowerDir": "/var/lib/docker/overlay2/6c53b42c7ae30be0a92aeee9616153aa98a76b8686b1ba574fed988eef723540-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c53b42c7ae30be0a92aeee9616153aa98a76b8686b1ba574fed988eef723540/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c53b42c7ae30be0a92aeee9616153aa98a76b8686b1ba574fed988eef723540/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c53b42c7ae30be0a92aeee9616153aa98a76b8686b1ba574fed988eef723540/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-493618",
	                "Source": "/var/lib/docker/volumes/addons-493618/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-493618",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-493618",
	                "name.minikube.sigs.k8s.io": "addons-493618",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a631e0cd76d05941fb0936045345b47fc87f5c3a110522f5c55a7218ec039637",
	            "SandboxKey": "/var/run/docker/netns/a631e0cd76d0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-493618": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:eb:b6:c3:02:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d904be0aa70c1af2cea11004150f1e24caa7082b6124c61db9de726e07acfb2f",
	                    "EndpointID": "8a31c67497c108fe079824c35877145f7cc3de3038048bb81926ece73d316513",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-493618",
	                        "7b0baa1647a9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
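
The host-port mappings in the NetworkSettings block above are what the cli_runner call in the earlier stderr log queried with a Go template; the same lookup can be repeated by hand:

docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-493618
# => 32768, matching the SSH client minikube opened against 127.0.0.1:32768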
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-493618 -n addons-493618
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-493618 logs -n 25: (1.190367596s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-035412 --alsologtostderr --binary-mirror http://127.0.0.1:38181 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-035412 │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │                     │
	│ delete  │ -p binary-mirror-035412                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-035412 │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │ 18 Oct 25 14:15 UTC │
	│ addons  │ disable dashboard -p addons-493618                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │                     │
	│ addons  │ enable dashboard -p addons-493618                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │                     │
	│ start   │ -p addons-493618 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │ 18 Oct 25 14:17 UTC │
	│ addons  │ addons-493618 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:17 UTC │                     │
	│ addons  │ addons-493618 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ enable headlamp -p addons-493618 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-493618 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-493618 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-493618 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-493618 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-493618 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-493618 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ ssh     │ addons-493618 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ ip      │ addons-493618 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │ 18 Oct 25 14:18 UTC │
	│ addons  │ addons-493618 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-493618                                                                                                                                                                                                                                                                                                                                                                                           │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │ 18 Oct 25 14:18 UTC │
	│ addons  │ addons-493618 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ addons-493618 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ ssh     │ addons-493618 ssh cat /opt/local-path-provisioner/pvc-a6ac2dbf-6d84-47b0-9a9a-79b9ddfd5256_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │ 18 Oct 25 14:18 UTC │
	│ addons  │ addons-493618 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ ip      │ addons-493618 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:20 UTC │ 18 Oct 25 14:20 UTC │
	│ addons  │ addons-493618 addons disable ingress-dns --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:20 UTC │                     │
	│ addons  │ addons-493618 addons disable ingress --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-493618        │ jenkins │ v1.37.0 │ 18 Oct 25 14:20 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 14:15:10
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 14:15:10.844195   94518 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:15:10.844315   94518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:15:10.844327   94518 out.go:374] Setting ErrFile to fd 2...
	I1018 14:15:10.844333   94518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:15:10.844524   94518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:15:10.845093   94518 out.go:368] Setting JSON to false
	I1018 14:15:10.845947   94518 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7062,"bootTime":1760789849,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:15:10.846045   94518 start.go:141] virtualization: kvm guest
	I1018 14:15:10.847714   94518 out.go:179] * [addons-493618] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 14:15:10.849170   94518 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 14:15:10.849206   94518 notify.go:220] Checking for updates...
	I1018 14:15:10.851802   94518 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:15:10.852939   94518 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 14:15:10.854257   94518 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 14:15:10.855457   94518 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 14:15:10.856592   94518 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 14:15:10.857794   94518 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:15:10.881142   94518 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 14:15:10.881259   94518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:15:10.937968   94518 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-18 14:15:10.928477658 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:15:10.938071   94518 docker.go:318] overlay module found
	I1018 14:15:10.939805   94518 out.go:179] * Using the docker driver based on user configuration
	I1018 14:15:10.941011   94518 start.go:305] selected driver: docker
	I1018 14:15:10.941024   94518 start.go:925] validating driver "docker" against <nil>
	I1018 14:15:10.941035   94518 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 14:15:10.941568   94518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:15:10.999497   94518 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-18 14:15:10.990143183 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:15:10.999700   94518 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 14:15:10.999943   94518 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 14:15:11.001690   94518 out.go:179] * Using Docker driver with root privileges
	I1018 14:15:11.002970   94518 cni.go:84] Creating CNI manager for ""
	I1018 14:15:11.003053   94518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 14:15:11.003064   94518 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 14:15:11.003145   94518 start.go:349] cluster config:
	{Name:addons-493618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-493618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:15:11.004498   94518 out.go:179] * Starting "addons-493618" primary control-plane node in "addons-493618" cluster
	I1018 14:15:11.005651   94518 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 14:15:11.006976   94518 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 14:15:11.008175   94518 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:15:11.008218   94518 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 14:15:11.008231   94518 cache.go:58] Caching tarball of preloaded images
	I1018 14:15:11.008228   94518 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 14:15:11.008318   94518 preload.go:233] Found /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 14:15:11.008329   94518 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 14:15:11.008714   94518 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/config.json ...
	I1018 14:15:11.008737   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/config.json: {Name:mkdee9574b0b95000e535daf1bcb85983e767ae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:11.024821   94518 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 14:15:11.024970   94518 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 14:15:11.024989   94518 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 14:15:11.024994   94518 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 14:15:11.025001   94518 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 14:15:11.025006   94518 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 14:15:23.525530   94518 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 14:15:23.525596   94518 cache.go:232] Successfully downloaded all kic artifacts
	I1018 14:15:23.525645   94518 start.go:360] acquireMachinesLock for addons-493618: {Name:mkcf1dcaefe933480e3898dd01dccab4476df687 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 14:15:23.525773   94518 start.go:364] duration metric: took 97.675µs to acquireMachinesLock for "addons-493618"
	I1018 14:15:23.525804   94518 start.go:93] Provisioning new machine with config: &{Name:addons-493618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-493618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 14:15:23.525942   94518 start.go:125] createHost starting for "" (driver="docker")
	I1018 14:15:23.527896   94518 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 14:15:23.528207   94518 start.go:159] libmachine.API.Create for "addons-493618" (driver="docker")
	I1018 14:15:23.528245   94518 client.go:168] LocalClient.Create starting
	I1018 14:15:23.528363   94518 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem
	I1018 14:15:23.977885   94518 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem
	I1018 14:15:24.038227   94518 cli_runner.go:164] Run: docker network inspect addons-493618 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 14:15:24.054247   94518 cli_runner.go:211] docker network inspect addons-493618 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 14:15:24.054314   94518 network_create.go:284] running [docker network inspect addons-493618] to gather additional debugging logs...
	I1018 14:15:24.054332   94518 cli_runner.go:164] Run: docker network inspect addons-493618
	W1018 14:15:24.070008   94518 cli_runner.go:211] docker network inspect addons-493618 returned with exit code 1
	I1018 14:15:24.070042   94518 network_create.go:287] error running [docker network inspect addons-493618]: docker network inspect addons-493618: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-493618 not found
	I1018 14:15:24.070073   94518 network_create.go:289] output of [docker network inspect addons-493618]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-493618 not found
	
	** /stderr **
	I1018 14:15:24.070206   94518 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 14:15:24.087173   94518 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e55a00}
	I1018 14:15:24.087222   94518 network_create.go:124] attempt to create docker network addons-493618 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 14:15:24.087280   94518 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-493618 addons-493618
	I1018 14:15:24.145261   94518 network_create.go:108] docker network addons-493618 192.168.49.0/24 created
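	[Editorial sketch] The two steps above, a failing "docker network inspect" probe followed by "docker network create", can be replayed outside minikube. Below is a minimal Go sketch using os/exec; the subnet, gateway, MTU and labels are copied verbatim from the log, but the probe-then-create flow is an illustration, not minikube's actual network_create.go logic.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "docker network inspect" exits non-zero when the network does not
	// exist yet, which is exactly the exit-code-1 seen in the log above.
	if exec.Command("docker", "network", "inspect", "addons-493618").Run() == nil {
		fmt.Println("network already exists")
		return
	}
	// Create the bridge network with the same flags minikube logged.
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.49.0/24",
		"--gateway=192.168.49.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=addons-493618",
		"addons-493618",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("create failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("created network: %s", out)
}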
	I1018 14:15:24.145291   94518 kic.go:121] calculated static IP "192.168.49.2" for the "addons-493618" container
	I1018 14:15:24.145378   94518 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 14:15:24.161100   94518 cli_runner.go:164] Run: docker volume create addons-493618 --label name.minikube.sigs.k8s.io=addons-493618 --label created_by.minikube.sigs.k8s.io=true
	I1018 14:15:24.178649   94518 oci.go:103] Successfully created a docker volume addons-493618
	I1018 14:15:24.178727   94518 cli_runner.go:164] Run: docker run --rm --name addons-493618-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-493618 --entrypoint /usr/bin/test -v addons-493618:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 14:15:30.677122   94518 cli_runner.go:217] Completed: docker run --rm --name addons-493618-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-493618 --entrypoint /usr/bin/test -v addons-493618:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (6.49835529s)
	I1018 14:15:30.677159   94518 oci.go:107] Successfully prepared a docker volume addons-493618
	I1018 14:15:30.677190   94518 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:15:30.677212   94518 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 14:15:30.677277   94518 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-493618:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 14:15:35.066928   94518 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-493618:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.389587346s)
	I1018 14:15:35.066965   94518 kic.go:203] duration metric: took 4.38974774s to extract preloaded images to volume ...
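	[Editorial sketch] The extraction above is a plain docker run with tar as the container entrypoint. A standalone Go replay of the same command, with the image digest, tarball path and volume name exactly as logged; the timing only mirrors the "duration metric" line the log prints.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	const image = "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6"
	const tarball = "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"

	start := time.Now()
	// Mount the preload read-only, mount the named volume at /extractDir,
	// and let tar (the container entrypoint) unpack it with lz4.
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "addons-493618:/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	).CombinedOutput()
	if err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	fmt.Printf("duration metric: took %s to extract preloaded images\n", time.Since(start))
}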
	W1018 14:15:35.067065   94518 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 14:15:35.067125   94518 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 14:15:35.067165   94518 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 14:15:35.125586   94518 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-493618 --name addons-493618 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-493618 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-493618 --network addons-493618 --ip 192.168.49.2 --volume addons-493618:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 14:15:35.438654   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Running}}
	I1018 14:15:35.457572   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:35.476400   94518 cli_runner.go:164] Run: docker exec addons-493618 stat /var/lib/dpkg/alternatives/iptables
	I1018 14:15:35.523494   94518 oci.go:144] the created container "addons-493618" has a running status.
	I1018 14:15:35.523536   94518 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa...
	I1018 14:15:35.628924   94518 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 14:15:35.654055   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:35.673745   94518 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 14:15:35.673769   94518 kic_runner.go:114] Args: [docker exec --privileged addons-493618 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 14:15:35.716664   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:35.738950   94518 machine.go:93] provisionDockerMachine start ...
	I1018 14:15:35.739054   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:35.761798   94518 main.go:141] libmachine: Using SSH client type: native
	I1018 14:15:35.762148   94518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 14:15:35.762167   94518 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 14:15:35.762887   94518 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38702->127.0.0.1:32768: read: connection reset by peer
	I1018 14:15:38.898415   94518 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-493618
	
	I1018 14:15:38.898444   94518 ubuntu.go:182] provisioning hostname "addons-493618"
	I1018 14:15:38.898497   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:38.915941   94518 main.go:141] libmachine: Using SSH client type: native
	I1018 14:15:38.916229   94518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 14:15:38.916247   94518 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-493618 && echo "addons-493618" | sudo tee /etc/hostname
	I1018 14:15:39.059322   94518 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-493618
	
	I1018 14:15:39.059403   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:39.077377   94518 main.go:141] libmachine: Using SSH client type: native
	I1018 14:15:39.077594   94518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 14:15:39.077611   94518 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-493618' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-493618/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-493618' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 14:15:39.210493   94518 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 14:15:39.210526   94518 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 14:15:39.210562   94518 ubuntu.go:190] setting up certificates
	I1018 14:15:39.210574   94518 provision.go:84] configureAuth start
	I1018 14:15:39.210640   94518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-493618
	I1018 14:15:39.227138   94518 provision.go:143] copyHostCerts
	I1018 14:15:39.227219   94518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 14:15:39.227331   94518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 14:15:39.227397   94518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 14:15:39.227463   94518 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.addons-493618 san=[127.0.0.1 192.168.49.2 addons-493618 localhost minikube]
	I1018 14:15:39.766960   94518 provision.go:177] copyRemoteCerts
	I1018 14:15:39.767023   94518 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 14:15:39.767059   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:39.785116   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:39.881305   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 14:15:39.900749   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 14:15:39.918059   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 14:15:39.936428   94518 provision.go:87] duration metric: took 725.836064ms to configureAuth
	I1018 14:15:39.936459   94518 ubuntu.go:206] setting minikube options for container-runtime
	I1018 14:15:39.936620   94518 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:15:39.936726   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:39.953814   94518 main.go:141] libmachine: Using SSH client type: native
	I1018 14:15:39.954104   94518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 14:15:39.954132   94518 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 14:15:40.197505   94518 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 14:15:40.197532   94518 machine.go:96] duration metric: took 4.458558157s to provisionDockerMachine
	I1018 14:15:40.197544   94518 client.go:171] duration metric: took 16.669289178s to LocalClient.Create
	I1018 14:15:40.197568   94518 start.go:167] duration metric: took 16.669361804s to libmachine.API.Create "addons-493618"
	I1018 14:15:40.197580   94518 start.go:293] postStartSetup for "addons-493618" (driver="docker")
	I1018 14:15:40.197594   94518 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 14:15:40.197676   94518 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 14:15:40.197732   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:40.214597   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:40.313123   94518 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 14:15:40.316613   94518 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 14:15:40.316636   94518 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 14:15:40.316649   94518 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/addons for local assets ...
	I1018 14:15:40.316713   94518 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/files for local assets ...
	I1018 14:15:40.316739   94518 start.go:296] duration metric: took 119.152647ms for postStartSetup
	I1018 14:15:40.317068   94518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-493618
	I1018 14:15:40.334170   94518 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/config.json ...
	I1018 14:15:40.334433   94518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 14:15:40.334480   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:40.351086   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:40.444185   94518 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 14:15:40.448983   94518 start.go:128] duration metric: took 16.923022705s to createHost
	I1018 14:15:40.449022   94518 start.go:83] releasing machines lock for "addons-493618", held for 16.923231309s
	I1018 14:15:40.449108   94518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-493618
	I1018 14:15:40.466240   94518 ssh_runner.go:195] Run: cat /version.json
	I1018 14:15:40.466278   94518 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 14:15:40.466315   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:40.466349   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:40.483258   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:40.484430   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:40.575602   94518 ssh_runner.go:195] Run: systemctl --version
	I1018 14:15:40.630562   94518 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 14:15:40.667185   94518 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 14:15:40.672266   94518 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 14:15:40.672342   94518 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 14:15:40.699256   94518 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 14:15:40.699280   94518 start.go:495] detecting cgroup driver to use...
	I1018 14:15:40.699309   94518 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 14:15:40.699382   94518 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 14:15:40.716022   94518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 14:15:40.728685   94518 docker.go:218] disabling cri-docker service (if available) ...
	I1018 14:15:40.728735   94518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 14:15:40.745467   94518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 14:15:40.763518   94518 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 14:15:40.852188   94518 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 14:15:40.941218   94518 docker.go:234] disabling docker service ...
	I1018 14:15:40.941291   94518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 14:15:40.960280   94518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 14:15:40.973519   94518 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 14:15:41.063896   94518 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 14:15:41.148959   94518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 14:15:41.161676   94518 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 14:15:41.176951   94518 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 14:15:41.177026   94518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:15:41.187952   94518 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 14:15:41.188013   94518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:15:41.197200   94518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:15:41.206326   94518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:15:41.215130   94518 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 14:15:41.223534   94518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:15:41.233043   94518 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:15:41.246975   94518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
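	[Editorial sketch] The sed commands above amount to rewriting two keys in /etc/crio/crio.conf.d/02-crio.conf (plus injecting the default_sysctls list). A self-contained Go sketch of the two key rewrites, using the same substitution patterns as the logged sed invocations; this is an illustration, not minikube's crio.go.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		log.Fatal(err)
	}
}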
	I1018 14:15:41.256324   94518 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 14:15:41.263987   94518 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 14:15:41.264069   94518 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 14:15:41.276695   94518 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 14:15:41.284747   94518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:15:41.360872   94518 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 14:15:41.466951   94518 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 14:15:41.467031   94518 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 14:15:41.471440   94518 start.go:563] Will wait 60s for crictl version
	I1018 14:15:41.471517   94518 ssh_runner.go:195] Run: which crictl
	I1018 14:15:41.475466   94518 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 14:15:41.500862   94518 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 14:15:41.500988   94518 ssh_runner.go:195] Run: crio --version
	I1018 14:15:41.529363   94518 ssh_runner.go:195] Run: crio --version
	I1018 14:15:41.558832   94518 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 14:15:41.560098   94518 cli_runner.go:164] Run: docker network inspect addons-493618 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 14:15:41.577556   94518 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 14:15:41.581897   94518 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 14:15:41.592876   94518 kubeadm.go:883] updating cluster {Name:addons-493618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-493618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 14:15:41.593049   94518 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:15:41.593097   94518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 14:15:41.626577   94518 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 14:15:41.626599   94518 crio.go:433] Images already preloaded, skipping extraction
	I1018 14:15:41.626659   94518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 14:15:41.651828   94518 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 14:15:41.651853   94518 cache_images.go:85] Images are preloaded, skipping loading
	I1018 14:15:41.651862   94518 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 14:15:41.651985   94518 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-493618 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-493618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 14:15:41.652054   94518 ssh_runner.go:195] Run: crio config
	I1018 14:15:41.697070   94518 cni.go:84] Creating CNI manager for ""
	I1018 14:15:41.697097   94518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 14:15:41.697114   94518 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 14:15:41.697135   94518 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-493618 NodeName:addons-493618 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 14:15:41.697247   94518 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-493618"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 14:15:41.697307   94518 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 14:15:41.705749   94518 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 14:15:41.705816   94518 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 14:15:41.714036   94518 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 14:15:41.727518   94518 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 14:15:41.743540   94518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
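	[Editorial sketch] The 2209-byte file staged above is the kubeadm config printed earlier. Assuming root on the node, the same file can be exercised before the real init with kubeadm's standard --dry-run flag; a minimal Go sketch (the path is the .new file from the scp line, and running it this way is our illustration, not something minikube does here).

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Dry-run the staged config: kubeadm renders the manifests it would
	// write without actually modifying the node.
	cmd := exec.Command("kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml.new",
		"--dry-run")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}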
	I1018 14:15:41.757431   94518 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 14:15:41.761307   94518 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 14:15:41.771339   94518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:15:41.848842   94518 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 14:15:41.872471   94518 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618 for IP: 192.168.49.2
	I1018 14:15:41.872502   94518 certs.go:195] generating shared ca certs ...
	I1018 14:15:41.872543   94518 certs.go:227] acquiring lock for ca certs: {Name:mk6d3b4e44856f498157352ffe8bb89d2c7a3998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:41.872726   94518 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key
	I1018 14:15:42.099521   94518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt ...
	I1018 14:15:42.099554   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt: {Name:mk29e474ac49378e3174669d30b699a0927d5939 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.099735   94518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key ...
	I1018 14:15:42.099748   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key: {Name:mk3df07768d76076523553d14b395d7aec695d8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.099827   94518 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key
	I1018 14:15:42.250081   94518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt ...
	I1018 14:15:42.250114   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt: {Name:mk9a000c7e66e15e6c70533a617d97af7b9526d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.250286   94518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key ...
	I1018 14:15:42.250299   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key: {Name:mked80e35481d07e9d2732a63324e9497996df0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.250389   94518 certs.go:257] generating profile certs ...
	I1018 14:15:42.250444   94518 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.key
	I1018 14:15:42.250458   94518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt with IP's: []
	I1018 14:15:42.310573   94518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt ...
	I1018 14:15:42.310609   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: {Name:mk817a96b6e7e4f2d967cd0f6b75836e15e32578 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.310772   94518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.key ...
	I1018 14:15:42.310783   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.key: {Name:mk2dc922e6933c9c6580f2453368c5810f4e481e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.310862   94518 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.key.4bff4883
	I1018 14:15:42.310880   94518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.crt.4bff4883 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 14:15:42.431608   94518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.crt.4bff4883 ...
	I1018 14:15:42.431643   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.crt.4bff4883: {Name:mkde2f0f0e05a8a44b434974d8b466c73645d4e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.431833   94518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.key.4bff4883 ...
	I1018 14:15:42.431850   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.key.4bff4883: {Name:mk6d2906da3206d1dab9c1811118ad12e5d1f944 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.431945   94518 certs.go:382] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.crt.4bff4883 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.crt
	I1018 14:15:42.432038   94518 certs.go:386] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.key.4bff4883 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.key
	I1018 14:15:42.432090   94518 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.key
	I1018 14:15:42.432109   94518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.crt with IP's: []
	I1018 14:15:42.629593   94518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.crt ...
	I1018 14:15:42.629624   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.crt: {Name:mkde5d9905c941564c933979fd5fade029103944 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.629812   94518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.key ...
	I1018 14:15:42.629826   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.key: {Name:mk36751e3ce77bf92cb13f27a98497c7ed9795bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
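	[Editorial sketch] Every certificate written above is an ordinary x509 pair produced with Go's standard library. A self-contained sketch that generates a CA analogous to "minikubeCA" and writes the .crt/.key files; the output file names are illustrative, and the validity period mirrors the CertExpiration:26280h0m0s value from the config dump.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: the template is both the cert being issued and the parent.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	writePEM("ca.crt", "CERTIFICATE", der)
	writePEM("ca.key", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(key))
}

func writePEM(path, typ string, der []byte) {
	f, err := os.Create(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := pem.Encode(f, &pem.Block{Type: typ, Bytes: der}); err != nil {
		log.Fatal(err)
	}
}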
	I1018 14:15:42.630014   94518 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 14:15:42.630049   94518 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem (1082 bytes)
	I1018 14:15:42.630071   94518 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem (1123 bytes)
	I1018 14:15:42.630096   94518 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem (1675 bytes)
	I1018 14:15:42.630764   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 14:15:42.650117   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 14:15:42.669226   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 14:15:42.690282   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 14:15:42.710069   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 14:15:42.728502   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 14:15:42.746298   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 14:15:42.764293   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 14:15:42.782203   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 14:15:42.801956   94518 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 14:15:42.814811   94518 ssh_runner.go:195] Run: openssl version
	I1018 14:15:42.821181   94518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 14:15:42.832594   94518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:15:42.836604   94518 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:15:42.836664   94518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:15:42.871729   94518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 14:15:42.881086   94518 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 14:15:42.884965   94518 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 14:15:42.885020   94518 kubeadm.go:400] StartCluster: {Name:addons-493618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-493618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:15:42.885113   94518 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:15:42.885177   94518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:15:42.913223   94518 cri.go:89] found id: ""
	I1018 14:15:42.913289   94518 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 14:15:42.921815   94518 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 14:15:42.930869   94518 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 14:15:42.930952   94518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 14:15:42.939927   94518 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 14:15:42.939956   94518 kubeadm.go:157] found existing configuration files:
	
	I1018 14:15:42.940012   94518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 14:15:42.948083   94518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 14:15:42.948160   94518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 14:15:42.955881   94518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 14:15:42.963517   94518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 14:15:42.963574   94518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 14:15:42.971090   94518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 14:15:42.979262   94518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 14:15:42.979341   94518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 14:15:42.986704   94518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 14:15:42.994650   94518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 14:15:42.994702   94518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
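	[editor's note] The four log pairs above all follow the same grep-then-remove pattern: check each kubeconfig for the expected control-plane endpoint and delete it when the endpoint (or the file itself) is absent, so kubeadm init starts clean. A minimal Go sketch of that pattern, assuming a plain exec runner (the real code runs these over SSH via ssh_runner.go):

    // staleconfig.go: sketch of the grep-then-remove cleanup shown in the log.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    var configs = []string{
    	"/etc/kubernetes/admin.conf",
    	"/etc/kubernetes/kubelet.conf",
    	"/etc/kubernetes/controller-manager.conf",
    	"/etc/kubernetes/scheduler.conf",
    }

    // cleanupStaleConfigs removes any kubeconfig that does not reference the
    // expected control-plane endpoint.
    func cleanupStaleConfigs() {
    	for _, cfg := range configs {
    		// grep exits non-zero when the endpoint (or the file) is missing.
    		if err := exec.Command("sudo", "grep", endpoint, cfg).Run(); err != nil {
    			fmt.Printf("%q not found in %s - will remove\n", endpoint, cfg)
    			// rm -f is idempotent: it succeeds whether or not the file exists.
    			exec.Command("sudo", "rm", "-f", cfg).Run()
    		}
    	}
    }

    func main() { cleanupStaleConfigs() }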
	I1018 14:15:43.002430   94518 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 14:15:43.040520   94518 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 14:15:43.040577   94518 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 14:15:43.062959   94518 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 14:15:43.063081   94518 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 14:15:43.063146   94518 kubeadm.go:318] OS: Linux
	I1018 14:15:43.063197   94518 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 14:15:43.063262   94518 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 14:15:43.063319   94518 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 14:15:43.063359   94518 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 14:15:43.063397   94518 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 14:15:43.063445   94518 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 14:15:43.063497   94518 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 14:15:43.063534   94518 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 14:15:43.122707   94518 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 14:15:43.122870   94518 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 14:15:43.123048   94518 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 14:15:43.130408   94518 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 14:15:43.132493   94518 out.go:252]   - Generating certificates and keys ...
	I1018 14:15:43.132580   94518 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 14:15:43.132638   94518 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 14:15:43.195493   94518 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 14:15:43.335589   94518 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 14:15:43.540635   94518 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 14:15:43.653902   94518 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 14:15:43.807694   94518 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 14:15:43.807847   94518 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-493618 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 14:15:43.853102   94518 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 14:15:43.853283   94518 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-493618 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 14:15:43.971707   94518 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 14:15:44.039605   94518 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 14:15:44.636757   94518 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 14:15:44.636886   94518 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 14:15:45.211213   94518 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 14:15:45.796318   94518 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 14:15:45.822982   94518 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 14:15:46.106180   94518 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 14:15:46.239037   94518 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 14:15:46.239513   94518 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 14:15:46.243151   94518 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 14:15:46.244760   94518 out.go:252]   - Booting up control plane ...
	I1018 14:15:46.244874   94518 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 14:15:46.244990   94518 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 14:15:46.245625   94518 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 14:15:46.260250   94518 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 14:15:46.260360   94518 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 14:15:46.267696   94518 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 14:15:46.267817   94518 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 14:15:46.267866   94518 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 14:15:46.370744   94518 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 14:15:46.370865   94518 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 14:15:47.371649   94518 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000990385s
	I1018 14:15:47.376256   94518 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 14:15:47.376432   94518 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 14:15:47.376566   94518 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 14:15:47.376709   94518 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 14:15:49.135751   94518 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.759510931s
	I1018 14:15:49.255604   94518 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.879264109s
	I1018 14:15:50.878424   94518 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.502192934s
	I1018 14:15:50.890048   94518 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 14:15:50.901423   94518 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 14:15:50.910227   94518 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 14:15:50.910432   94518 kubeadm.go:318] [mark-control-plane] Marking the node addons-493618 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 14:15:50.918188   94518 kubeadm.go:318] [bootstrap-token] Using token: 2jy7nx.1zs0hlvym10ojzfo
	I1018 14:15:50.919589   94518 out.go:252]   - Configuring RBAC rules ...
	I1018 14:15:50.919736   94518 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 14:15:50.923222   94518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 14:15:50.928452   94518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 14:15:50.931223   94518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 14:15:50.933641   94518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 14:15:50.937165   94518 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 14:15:51.285114   94518 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 14:15:51.702798   94518 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 14:15:52.284201   94518 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 14:15:52.285014   94518 kubeadm.go:318] 
	I1018 14:15:52.285123   94518 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 14:15:52.285134   94518 kubeadm.go:318] 
	I1018 14:15:52.285253   94518 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 14:15:52.285261   94518 kubeadm.go:318] 
	I1018 14:15:52.285297   94518 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 14:15:52.285409   94518 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 14:15:52.285497   94518 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 14:15:52.285507   94518 kubeadm.go:318] 
	I1018 14:15:52.285594   94518 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 14:15:52.285604   94518 kubeadm.go:318] 
	I1018 14:15:52.285673   94518 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 14:15:52.285694   94518 kubeadm.go:318] 
	I1018 14:15:52.285777   94518 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 14:15:52.285856   94518 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 14:15:52.285945   94518 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 14:15:52.285954   94518 kubeadm.go:318] 
	I1018 14:15:52.286046   94518 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 14:15:52.286158   94518 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 14:15:52.286173   94518 kubeadm.go:318] 
	I1018 14:15:52.286260   94518 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 2jy7nx.1zs0hlvym10ojzfo \
	I1018 14:15:52.286412   94518 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9845daa2461a34dd973607f8353b114051f1b03075ff9500a74fb21865e04329 \
	I1018 14:15:52.286450   94518 kubeadm.go:318] 	--control-plane 
	I1018 14:15:52.286458   94518 kubeadm.go:318] 
	I1018 14:15:52.286553   94518 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 14:15:52.286561   94518 kubeadm.go:318] 
	I1018 14:15:52.286655   94518 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 2jy7nx.1zs0hlvym10ojzfo \
	I1018 14:15:52.286798   94518 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9845daa2461a34dd973607f8353b114051f1b03075ff9500a74fb21865e04329 
	I1018 14:15:52.288880   94518 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 14:15:52.289078   94518 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
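	[editor's note] The [kubelet-check] and [control-plane-check] lines above poll fixed health endpoints until they return 200 or a 4m0s deadline expires. The endpoints come straight from the log; the TLS handling and poll interval below are assumptions in this sketch:

    // healthcheck.go: sketch of the control-plane polling kubeadm reports above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    var endpoints = map[string]string{
    	"kubelet":                 "http://127.0.0.1:10248/healthz",
    	"kube-apiserver":          "https://192.168.49.2:8443/livez",
    	"kube-controller-manager": "https://127.0.0.1:10257/healthz",
    	"kube-scheduler":          "https://127.0.0.1:10259/livez",
    }

    func waitHealthy(name, url string, timeout time.Duration) error {
    	// The components serve self-signed certs, so skip verification here.
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("%s is healthy\n", name)
    				return nil
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("%s not healthy after %s", name, timeout)
    }

    func main() {
    	for name, url := range endpoints {
    		if err := waitHealthy(name, url, 4*time.Minute); err != nil {
    			fmt.Println(err)
    		}
    	}
    }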
	I1018 14:15:52.289109   94518 cni.go:84] Creating CNI manager for ""
	I1018 14:15:52.289123   94518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 14:15:52.290888   94518 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 14:15:52.292177   94518 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 14:15:52.296572   94518 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 14:15:52.296594   94518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 14:15:52.309832   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 14:15:52.517329   94518 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 14:15:52.517424   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:52.517457   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-493618 minikube.k8s.io/updated_at=2025_10_18T14_15_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=addons-493618 minikube.k8s.io/primary=true
	I1018 14:15:52.601850   94518 ops.go:34] apiserver oom_adj: -16
	I1018 14:15:52.601988   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:53.102345   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:53.602765   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:54.102512   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:54.602301   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:55.102326   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:55.602077   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:56.102665   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:56.602275   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:57.102902   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:57.602898   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:57.666050   94518 kubeadm.go:1113] duration metric: took 5.148697107s to wait for elevateKubeSystemPrivileges
	I1018 14:15:57.666085   94518 kubeadm.go:402] duration metric: took 14.781070154s to StartCluster
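	[editor's note] The run of `kubectl get sa default` lines above is a fixed-interval poll: minikube reissues the command roughly every 500ms until the default service account exists, then records the duration (5.15s here). A minimal sketch of that loop; the kubectl and kubeconfig paths come from the log, the timeout is an assumption:

    // sawait.go: sketch of the poll loop above - retry `kubectl get sa default`
    // until the default service account exists or the deadline passes.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitDefaultSA(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo",
    			"/var/lib/minikube/binaries/v1.34.1/kubectl",
    			"get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if cmd.Run() == nil {
    			return nil // the default service account exists
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    	return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
    	start := time.Now()
    	if err := waitDefaultSA(6 * time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("took %s to wait for elevateKubeSystemPrivileges\n", time.Since(start))
    }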
	I1018 14:15:57.666113   94518 settings.go:142] acquiring lock: {Name:mk9d9d72167811b3554435e137ed17137bf54a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:57.666241   94518 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 14:15:57.666666   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/kubeconfig: {Name:mkdc6fbc9be8e57447490b30dd5bb0086611876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:57.666904   94518 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 14:15:57.666964   94518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 14:15:57.667023   94518 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 14:15:57.667176   94518 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:15:57.667191   94518 addons.go:69] Setting ingress-dns=true in profile "addons-493618"
	I1018 14:15:57.667213   94518 addons.go:238] Setting addon ingress-dns=true in "addons-493618"
	I1018 14:15:57.667219   94518 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-493618"
	I1018 14:15:57.667224   94518 addons.go:69] Setting cloud-spanner=true in profile "addons-493618"
	I1018 14:15:57.667225   94518 addons.go:69] Setting yakd=true in profile "addons-493618"
	I1018 14:15:57.667237   94518 addons.go:238] Setting addon cloud-spanner=true in "addons-493618"
	I1018 14:15:57.667243   94518 addons.go:238] Setting addon yakd=true in "addons-493618"
	I1018 14:15:57.667261   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667270   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667306   94518 addons.go:69] Setting registry-creds=true in profile "addons-493618"
	I1018 14:15:57.667319   94518 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-493618"
	I1018 14:15:57.667325   94518 addons.go:238] Setting addon registry-creds=true in "addons-493618"
	I1018 14:15:57.667340   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667333   94518 addons.go:69] Setting ingress=true in profile "addons-493618"
	I1018 14:15:57.667353   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667362   94518 addons.go:238] Setting addon ingress=true in "addons-493618"
	I1018 14:15:57.667347   94518 addons.go:69] Setting gcp-auth=true in profile "addons-493618"
	I1018 14:15:57.667379   94518 addons.go:69] Setting inspektor-gadget=true in profile "addons-493618"
	I1018 14:15:57.667395   94518 addons.go:238] Setting addon inspektor-gadget=true in "addons-493618"
	I1018 14:15:57.667413   94518 mustload.go:65] Loading cluster: addons-493618
	I1018 14:15:57.667421   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667425   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667659   94518 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:15:57.667849   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667856   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667873   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667881   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667885   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667927   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667957   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667977   94518 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-493618"
	I1018 14:15:57.667997   94518 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-493618"
	I1018 14:15:57.668260   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.668538   94518 addons.go:69] Setting volcano=true in profile "addons-493618"
	I1018 14:15:57.668558   94518 addons.go:238] Setting addon volcano=true in "addons-493618"
	I1018 14:15:57.668585   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.668706   94518 addons.go:69] Setting default-storageclass=true in profile "addons-493618"
	I1018 14:15:57.668731   94518 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-493618"
	I1018 14:15:57.668892   94518 addons.go:69] Setting volumesnapshots=true in profile "addons-493618"
	I1018 14:15:57.668932   94518 addons.go:238] Setting addon volumesnapshots=true in "addons-493618"
	I1018 14:15:57.668964   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.669072   94518 addons.go:69] Setting storage-provisioner=true in profile "addons-493618"
	I1018 14:15:57.669100   94518 addons.go:238] Setting addon storage-provisioner=true in "addons-493618"
	I1018 14:15:57.669121   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.669367   94518 out.go:179] * Verifying Kubernetes components...
	I1018 14:15:57.667211   94518 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-493618"
	I1018 14:15:57.669415   94518 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-493618"
	I1018 14:15:57.669445   94518 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-493618"
	I1018 14:15:57.669466   94518 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-493618"
	I1018 14:15:57.669478   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.669495   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.669783   94518 addons.go:69] Setting registry=true in profile "addons-493618"
	I1018 14:15:57.669803   94518 addons.go:238] Setting addon registry=true in "addons-493618"
	I1018 14:15:57.669828   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667372   94518 addons.go:69] Setting metrics-server=true in profile "addons-493618"
	I1018 14:15:57.670134   94518 addons.go:238] Setting addon metrics-server=true in "addons-493618"
	I1018 14:15:57.670161   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667262   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.671078   94518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:15:57.677610   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.677633   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.678278   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.678433   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.680282   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.683274   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.686374   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.687318   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.687981   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.726980   94518 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 14:15:57.727164   94518 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 14:15:57.728296   94518 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 14:15:57.728322   94518 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 14:15:57.728394   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.731709   94518 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 14:15:57.735505   94518 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 14:15:57.735529   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 14:15:57.735623   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.744401   94518 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 14:15:57.746166   94518 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 14:15:57.746193   94518 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 14:15:57.746276   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.753364   94518 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-493618"
	I1018 14:15:57.753422   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.753977   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.757779   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.760961   94518 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 14:15:57.761050   94518 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 14:15:57.761128   94518 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 14:15:57.765412   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 14:15:57.765469   94518 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 14:15:57.765570   94518 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 14:15:57.765575   94518 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 14:15:57.765590   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 14:15:57.765649   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.765678   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.773672   94518 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 14:15:57.782459   94518 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 14:15:57.782523   94518 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 14:15:57.782594   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.782951   94518 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:15:57.783453   94518 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 14:15:57.783474   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 14:15:57.784494   94518 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 14:15:57.785442   94518 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 14:15:57.785814   94518 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:15:57.785850   94518 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 14:15:57.785866   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 14:15:57.785946   94518 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 14:15:57.786008   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.786341   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.795904   94518 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 14:15:57.795986   94518 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 14:15:57.796002   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 14:15:57.796075   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.797016   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 14:15:57.797107   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.797727   94518 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 14:15:57.797746   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 14:15:57.797798   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	W1018 14:15:57.799421   94518 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 14:15:57.802268   94518 addons.go:238] Setting addon default-storageclass=true in "addons-493618"
	I1018 14:15:57.802319   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.802790   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.803968   94518 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 14:15:57.806759   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.806881   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 14:15:57.807070   94518 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 14:15:57.807097   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 14:15:57.807159   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.809404   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 14:15:57.810905   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 14:15:57.812585   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 14:15:57.814158   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 14:15:57.817562   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 14:15:57.818954   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 14:15:57.820159   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 14:15:57.821469   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.822222   94518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 14:15:57.822661   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.825309   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 14:15:57.825341   94518 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 14:15:57.825404   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.843406   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.845448   94518 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 14:15:57.846549   94518 out.go:179]   - Using image docker.io/busybox:stable
	I1018 14:15:57.847761   94518 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 14:15:57.847936   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 14:15:57.848446   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.848859   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.862892   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.865577   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.865604   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.867128   94518 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 14:15:57.867148   94518 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 14:15:57.867202   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.870311   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.875963   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.876057   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.878232   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	W1018 14:15:57.891707   94518 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 14:15:57.891829   94518 retry.go:31] will retry after 359.382679ms: ssh: handshake failed: EOF
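	[editor's note] The sshutil/retry.go pair above shows a transient handshake EOF being absorbed by a randomized backoff ("will retry after 359.382679ms"). A sketch of that dial-with-retry shape; only the retry-on-EOF behavior is visible in the log, so the jitter range and attempt cap here are assumptions:

    // sshretry.go: sketch of the dial-with-retry behavior logged by retry.go.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		client, err := ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			return client, nil
    		}
    		lastErr = err
    		// Randomized backoff, so parallel dialers do not retry in lockstep.
    		delay := time.Duration(200+rand.Intn(400)) * time.Millisecond
    		fmt.Printf("will retry after %s: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return nil, lastErr
    }

    func main() {
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    		Timeout:         5 * time.Second,
    	}
    	if _, err := dialWithRetry("127.0.0.1:32768", cfg, 3); err != nil {
    		fmt.Println("giving up:", err)
    	}
    }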
	I1018 14:15:57.896432   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.907502   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.909844   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.912211   94518 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 14:15:57.988091   94518 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 14:15:57.988173   94518 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 14:15:57.997450   94518 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 14:15:57.997478   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 14:15:58.003508   94518 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 14:15:58.003538   94518 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 14:15:58.006239   94518 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 14:15:58.006263   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 14:15:58.015848   94518 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 14:15:58.015893   94518 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 14:15:58.020396   94518 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 14:15:58.020421   94518 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 14:15:58.024488   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 14:15:58.035697   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 14:15:58.035896   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 14:15:58.038172   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 14:15:58.041347   94518 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 14:15:58.041371   94518 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 14:15:58.049321   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 14:15:58.050245   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 14:15:58.052160   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 14:15:58.061988   94518 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:15:58.062019   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 14:15:58.069226   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 14:15:58.070543   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 14:15:58.074239   94518 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 14:15:58.074279   94518 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 14:15:58.079168   94518 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 14:15:58.079198   94518 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 14:15:58.092132   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:15:58.096100   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 14:15:58.102856   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 14:15:58.122432   94518 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 14:15:58.122460   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 14:15:58.133719   94518 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 14:15:58.133827   94518 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 14:15:58.178253   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 14:15:58.201737   94518 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 14:15:58.201955   94518 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 14:15:58.250630   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 14:15:58.250660   94518 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 14:15:58.257881   94518 node_ready.go:35] waiting up to 6m0s for node "addons-493618" to be "Ready" ...
	I1018 14:15:58.259987   94518 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1018 14:15:58.305869   94518 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 14:15:58.305892   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 14:15:58.372074   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 14:15:58.495259   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 14:15:58.495413   94518 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 14:15:58.542356   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 14:15:58.542459   94518 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 14:15:58.574546   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 14:15:58.574578   94518 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 14:15:58.610004   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 14:15:58.610119   94518 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 14:15:58.650707   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 14:15:58.650741   94518 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 14:15:58.689762   94518 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 14:15:58.689866   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 14:15:58.728580   94518 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 14:15:58.728663   94518 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 14:15:58.777291   94518 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 14:15:58.777320   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 14:15:58.779077   94518 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-493618" context rescaled to 1 replicas
	I1018 14:15:58.793294   94518 addons.go:479] Verifying addon registry=true in "addons-493618"
	I1018 14:15:58.795632   94518 out.go:179] * Verifying registry addon...
	I1018 14:15:58.797513   94518 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 14:15:58.802260   94518 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 14:15:58.802346   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 14:15:58.819478   94518 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 14:15:58.819580   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
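	[editor's note] The kapi.go lines above wait for pods matching the label selector kubernetes.io/minikube-addons=registry in kube-system to leave Pending. A generic client-go sketch of that wait, not minikube's actual implementation; the selector, namespace, and kubeconfig path come from the log:

    // podwait.go: sketch of a label-selector pod wait with client-go.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	selector := "kubernetes.io/minikube-addons=registry"

    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err == nil && len(pods.Items) > 0 &&
    			pods.Items[0].Status.Phase == corev1.PodRunning {
    			fmt.Println("registry pod is Running")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for", selector)
    }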
	I1018 14:15:58.840463   94518 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 14:15:58.840559   94518 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 14:15:58.884762   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 14:15:59.253579   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.21783586s)
	I1018 14:15:59.253646   94518 addons.go:479] Verifying addon ingress=true in "addons-493618"
	I1018 14:15:59.253649   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.217694877s)
	I1018 14:15:59.253724   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.215515463s)
	I1018 14:15:59.253830   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.204458532s)
	I1018 14:15:59.253862   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.203589981s)
	I1018 14:15:59.253978   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.184732075s)
	I1018 14:15:59.253955   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.201771193s)
	I1018 14:15:59.254125   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183543894s)
	I1018 14:15:59.254146   94518 addons.go:479] Verifying addon metrics-server=true in "addons-493618"
	I1018 14:15:59.254259   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.162092007s)
	I1018 14:15:59.254308   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.151427901s)
	W1018 14:15:59.254331   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:15:59.254361   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.076083963s)
	I1018 14:15:59.254360   94518 retry.go:31] will retry after 263.001722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
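	[editor's note] A likely root cause is visible earlier in this run: the scp step copied ig-crd.yaml at just 14 bytes, i.e. an effectively empty manifest, which matches kubectl's "apiVersion not set, kind not set" complaint here. A minimal pre-apply sanity check, sketched under the assumption of sigs.k8s.io/yaml; illustrative only, since minikube's actual behavior (shown below) is to retry the apply:

    // manifestcheck.go: sketch of a pre-apply sanity check for the failure above -
    // reject single-document manifests missing apiVersion or kind.
    package main

    import (
    	"fmt"
    	"os"

    	"sigs.k8s.io/yaml"
    )

    type typeMeta struct {
    	APIVersion string `json:"apiVersion"`
    	Kind       string `json:"kind"`
    }

    func validate(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var tm typeMeta
    	if err := yaml.Unmarshal(data, &tm); err != nil {
    		return fmt.Errorf("%s: %w", path, err)
    	}
    	if tm.APIVersion == "" || tm.Kind == "" {
    		// This is exactly what kubectl reported for the 14-byte ig-crd.yaml.
    		return fmt.Errorf("%s: apiVersion or kind not set", path)
    	}
    	return nil
    }

    func main() {
    	if err := validate("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
    		fmt.Println(err)
    	}
    }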
	I1018 14:15:59.254285   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.158155774s)
	I1018 14:15:59.255381   94518 out.go:179] * Verifying ingress addon...
	I1018 14:15:59.256267   94518 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-493618 service yakd-dashboard -n yakd-dashboard
	
	I1018 14:15:59.258528   94518 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 14:15:59.262829   94518 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 14:15:59.262849   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
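
The kapi.go poller above is minikube's in-process equivalent of watching the labelled pods; while a run is live, the same progression is visible with the selector and namespace from the log:

	kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx --watch
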
	W1018 14:15:59.262881   94518 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
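
The storage-provisioner-rancher error is a routine optimistic-concurrency conflict: another writer updated the StorageClass between the addon's read and its write, so the apiserver rejected the stale object. Re-applying the annotation against the latest version is the fix; the manual equivalent of what the callback attempts:

	kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
	kubectl get storageclass   # the chosen class is listed with "(default)"
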
	I1018 14:15:59.362679   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:15:59.517934   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:15:59.762348   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:15:59.767796   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.395673176s)
	W1018 14:15:59.767854   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 14:15:59.767878   94518 retry.go:31] will retry after 185.211057ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
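
"ensure CRDs are installed first" is an ordering problem rather than a content problem: the VolumeSnapshotClass object was submitted in the same apply batch that creates its CustomResourceDefinition, before the CRD had been established in API discovery. Splitting the batch and waiting on the CRD condition avoids the race; a sketch using the file names from the stderr above:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml

The --force re-apply at 14:15:59.953296 below appears to succeed once the CRDs have registered; no further retry is logged for that batch.
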
	I1018 14:15:59.768052   94518 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-493618"
	I1018 14:15:59.770042   94518 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 14:15:59.772172   94518 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 14:15:59.775895   94518 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 14:15:59.775932   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:15:59.862807   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:15:59.953296   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1018 14:16:00.179866   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:00.179932   94518 retry.go:31] will retry after 259.138229ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	W1018 14:16:00.261895   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
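
node_ready.go is gating on the node's Ready condition, which stays False until the kubelet and the CNI settle; the same check by hand, with the node name from this run:

	kubectl get node addons-493618 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
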
	I1018 14:16:00.262066   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:00.276175   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:00.300887   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:00.439689   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:00.762081   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:00.862741   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:00.862953   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:01.262222   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:01.275838   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:01.300734   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:01.762110   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:01.862891   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:01.863056   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:02.261689   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:02.275594   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:02.300586   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:02.456467   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.50311722s)
	I1018 14:16:02.456599   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.016876084s)
	W1018 14:16:02.456633   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:02.456657   94518 retry.go:31] will retry after 555.919598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	W1018 14:16:02.761271   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:02.761679   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:02.862629   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:02.862696   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:03.013466   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:03.261821   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:03.275574   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:03.301416   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 14:16:03.558757   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:03.558796   94518 retry.go:31] will retry after 725.766019ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:16:03.761660   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:03.862928   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:03.862971   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:04.262257   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:04.275978   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:04.285123   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:04.301354   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:04.762331   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 14:16:04.844992   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:04.845023   94518 retry.go:31] will retry after 1.701988941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
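
The retry.go delays across these attempts (263ms, 259ms, 555ms, 725ms, 1.70s here, then 1.89s, 3.31s, 2.15s, 5.17s and 9.71s further down) trace a roughly exponential backoff with jitter. A minimal shell sketch of the same shape, illustrative only and not minikube's implementation (unlike the bounded retry here, this loop never gives up):

	delay=0.25
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force \
	    -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml; do
	  sleep "$delay"                                       # GNU sleep accepts fractional seconds
	  delay=$(awk -v d="$delay" 'BEGIN { print d * 2 }')   # double the delay each round
	done
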
	I1018 14:16:04.862778   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:04.862875   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 14:16:05.261697   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:05.262238   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:05.275990   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:05.300734   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:05.366047   94518 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 14:16:05.366115   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:16:05.383978   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:16:05.493818   94518 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 14:16:05.506797   94518 addons.go:238] Setting addon gcp-auth=true in "addons-493618"
	I1018 14:16:05.506861   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:16:05.507286   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:16:05.523892   94518 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 14:16:05.523968   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:16:05.541453   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:16:05.636326   94518 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 14:16:05.637653   94518 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:16:05.638692   94518 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 14:16:05.638712   94518 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 14:16:05.652837   94518 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 14:16:05.652861   94518 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 14:16:05.666299   94518 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 14:16:05.666320   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 14:16:05.680085   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 14:16:05.761566   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:05.775505   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:05.801315   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:05.994641   94518 addons.go:479] Verifying addon gcp-auth=true in "addons-493618"
	I1018 14:16:05.996092   94518 out.go:179] * Verifying gcp-auth addon...
	I1018 14:16:05.998105   94518 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 14:16:06.000784   94518 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 14:16:06.000799   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
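
gcp-auth is the last addon wired up in this run; its webhook pod can be checked directly with the label the waiter above polls on:

	kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth
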
	I1018 14:16:06.261679   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:06.275363   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:06.301313   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:06.501300   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:06.547370   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:06.762544   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:06.775122   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:06.801020   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:07.001387   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:07.102721   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:07.102751   94518 retry.go:31] will retry after 1.894325627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	W1018 14:16:07.261769   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:07.261821   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:07.275476   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:07.301602   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:07.501354   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:07.761681   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:07.775315   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:07.801142   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:08.000985   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:08.261664   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:08.275376   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:08.301438   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:08.501200   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:08.762331   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:08.779339   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:08.801735   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:08.997988   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:09.001098   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:09.261663   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:09.275805   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:09.300898   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:09.500718   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:09.549206   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:09.549247   94518 retry.go:31] will retry after 3.310963502s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:16:09.761098   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 14:16:09.761118   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:09.776183   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:09.800955   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:10.002285   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:10.261461   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:10.275203   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:10.300857   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:10.501789   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:10.762046   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:10.775575   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:10.801657   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:11.001278   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:11.261449   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:11.275212   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:11.301160   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:11.500880   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:11.761928   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:11.775663   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:11.800279   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:12.001764   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:12.261645   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:12.261934   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:12.275426   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:12.301237   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:12.501106   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:12.762500   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:12.775341   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:12.801069   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:12.861213   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:13.001726   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:13.261985   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:13.275741   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:13.300410   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 14:16:13.412655   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:13.412687   94518 retry.go:31] will retry after 2.146003967s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:16:13.501415   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:13.761464   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:13.775396   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:13.801074   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:14.001649   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:14.261663   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:14.275331   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:14.301036   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:14.500895   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:14.760721   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:14.762189   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:14.775457   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:14.801062   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:15.001069   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:15.261905   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:15.275163   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:15.300759   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:15.501790   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:15.558849   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:15.761297   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:15.775871   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:15.800389   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:16.001291   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:16.114482   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:16.114511   94518 retry.go:31] will retry after 5.173996473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:16:16.261692   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:16.275397   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:16.301389   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:16.500980   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:16.760795   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:16.762022   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:16.775519   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:16.801313   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:17.000944   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:17.261757   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:17.275325   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:17.300931   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:17.502121   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:17.761220   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:17.775796   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:17.800763   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:18.001822   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:18.261706   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:18.275401   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:18.301218   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:18.500894   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:18.761938   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:18.775652   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:18.800266   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:19.001007   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:19.261023   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:19.261757   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:19.275393   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:19.301127   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:19.500951   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:19.761787   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:19.775216   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:19.800787   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:20.001688   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:20.261951   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:20.275392   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:20.301151   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:20.501366   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:20.761599   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:20.776707   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:20.800198   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:21.001395   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:21.261329   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 14:16:21.261409   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:21.275153   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:21.289245   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:21.300476   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:21.501513   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:21.761123   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:21.775774   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:21.800635   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 14:16:21.851749   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:21.851778   94518 retry.go:31] will retry after 9.714380288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 14:16:22.001747   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:22.261813   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:22.275852   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:22.300396   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:22.501345   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:22.761740   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:22.775460   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:22.801088   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:23.000938   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:23.261494   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:23.275351   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:23.301186   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:23.501437   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:23.761231   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:23.761277   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:23.776153   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:23.800929   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:24.001798   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same four "Pending" polls (ingress-nginx, csi-hostpath-driver, registry, gcp-auth) repeat every ~250ms through 14:16:31; node_ready warnings for node "addons-493618" ("Ready":"False", will retry) recur at 14:16:25.76, 14:16:28.26, and 14:16:30.76 ...]
	I1018 14:16:31.501831   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:31.566932   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:31.761417   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:31.774979   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:31.800622   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:32.001291   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:32.118968   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:32.119002   94518 retry.go:31] will retry after 19.360841038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[... stdout and stderr identical to the W1018 14:16:32.118968 warning above: the eight gadget resources apply cleanly, ig-crd.yaml fails validation with "apiVersion not set, kind not set" ...]
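Note: the failure above comes from kubectl's client-side validation, which requires every document in an applied manifest to declare apiVersion and kind; "[apiVersion not set, kind not set]" typically means ig-crd.yaml contains a document missing both, e.g. an empty document left behind by a stray "---" separator or a truncated file. A minimal standalone sketch of the same pre-check (assumes gopkg.in/yaml.v3; this is not minikube code):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3" // assumed dependency for the sketch
	)

	func main() {
		f, err := os.Open(os.Args[1]) // e.g. ig-crd.yaml
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for i := 0; ; i++ {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			// kubectl's validator rejects any document lacking these two fields
			if doc["apiVersion"] == nil || doc["kind"] == nil {
				fmt.Printf("document %d: apiVersion/kind not set\n", i)
			}
		}
	}

Running something like this against the manifest on the node would pinpoint which document kubectl is rejecting, rather than falling back to --validate=false.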
	I1018 14:16:32.261895   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... the same four "Pending" polls repeat every ~250ms through 14:16:38; node_ready warnings for node "addons-493618" ("Ready":"False", will retry) recur at 14:16:32.76, 14:16:35.26, and 14:16:37.26 ...]
	I1018 14:16:38.263391   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:38.275251   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:38.301068   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:38.501978   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:38.760157   94518 node_ready.go:49] node "addons-493618" is "Ready"
	I1018 14:16:38.760187   94518 node_ready.go:38] duration metric: took 40.502258296s for node "addons-493618" to be "Ready" ...
	I1018 14:16:38.760202   94518 api_server.go:52] waiting for apiserver process to appear ...
	I1018 14:16:38.760256   94518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 14:16:38.761614   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:38.775477   94518 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 14:16:38.775499   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:38.778619   94518 api_server.go:72] duration metric: took 41.111664217s to wait for apiserver process to appear ...
	I1018 14:16:38.778646   94518 api_server.go:88] waiting for apiserver healthz status ...
	I1018 14:16:38.778670   94518 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 14:16:38.782820   94518 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 14:16:38.783979   94518 api_server.go:141] control plane version: v1.34.1
	I1018 14:16:38.784055   94518 api_server.go:131] duration metric: took 5.400033ms to wait for apiserver health ...
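Note: the healthz gate above is a plain HTTPS GET against the apiserver's /healthz endpoint, repeated until it returns 200 with body "ok". A minimal sketch of that poll; the address comes from the log, while skipping TLS verification and the 500ms interval are assumptions for brevity (the real check trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// assumption for the sketch; minikube verifies the cluster CA instead
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for {
			resp, err := client.Get("https://192.168.49.2:8443/healthz")
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Println("apiserver healthz: ok")
				return
			}
			if resp != nil {
				resp.Body.Close()
			}
			time.Sleep(500 * time.Millisecond)
		}
	}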
	I1018 14:16:38.784069   94518 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 14:16:38.790511   94518 system_pods.go:59] 20 kube-system pods found
	I1018 14:16:38.790555   94518 system_pods.go:61] "amd-gpu-device-plugin-ps8fn" [212e7c35-e96b-474d-a839-a1234082db1e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:16:38.790566   94518 system_pods.go:61] "coredns-66bc5c9577-zsv4k" [f2e200e1-f869-49c9-9964-a7ce8b78fc36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:16:38.790574   94518 system_pods.go:61] "csi-hostpath-attacher-0" [3623b39a-4edb-4205-aeef-6f143a1226ca] Pending
	I1018 14:16:38.790580   94518 system_pods.go:61] "csi-hostpath-resizer-0" [d1f8fabd-93ae-47ef-9455-9609361e348d] Pending
	I1018 14:16:38.790589   94518 system_pods.go:61] "csi-hostpathplugin-t8ksl" [ed3177ab-2b66-47a9-8f89-40db56cbc332] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 14:16:38.790595   94518 system_pods.go:61] "etcd-addons-493618" [b9b0c453-cead-48a5-916f-b6743b403b4d] Running
	I1018 14:16:38.790602   94518 system_pods.go:61] "kindnet-vhk9j" [f447a047-b1f9-4b54-8f86-2bceeda6e6f2] Running
	I1018 14:16:38.790608   94518 system_pods.go:61] "kube-apiserver-addons-493618" [75f188ee-c145-4c6c-8fe4-ec8c6c92a91c] Running
	I1018 14:16:38.790613   94518 system_pods.go:61] "kube-controller-manager-addons-493618" [18b03cc0-9988-4bdf-b7a2-1c4c0a50ea7c] Running
	I1018 14:16:38.790621   94518 system_pods.go:61] "kube-ingress-dns-minikube" [d97bcf56-e215-486d-bd23-84d300a41f66] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:16:38.790626   94518 system_pods.go:61] "kube-proxy-5x2v2" [43716c06-fdd2-4f68-87f9-d90cf2f16440] Running
	I1018 14:16:38.790631   94518 system_pods.go:61] "kube-scheduler-addons-493618" [dc3fc718-9168-4264-aa9e-4ce985ac1d72] Running
	I1018 14:16:38.790638   94518 system_pods.go:61] "metrics-server-85b7d694d7-hzzlq" [93c9f378-d616-4060-a537-9060c4ce996a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:16:38.790647   94518 system_pods.go:61] "nvidia-device-plugin-daemonset-w9ks6" [0837d0f2-cef9-4ff6-b233-2547cb3c5f55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:16:38.790655   94518 system_pods.go:61] "registry-6b586f9694-pdjc2" [d5f69ebc-b615-4334-8fe1-593c6fe7f496] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:16:38.790665   94518 system_pods.go:61] "registry-creds-764b6fb674-czp24" [a3c3218a-127e-4d0d-90f6-a2b735fc7c5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:16:38.790681   94518 system_pods.go:61] "registry-proxy-dddz6" [55ecd78e-f023-433a-80aa-4a98aed74734] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:16:38.790688   94518 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8ftdc" [41ec4b8b-e0f1-4aed-a826-e4d50c52e35d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:38.790699   94518 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fcm6w" [fcafa25f-dac8-4c6c-9dee-6c155ff5f214] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:38.790706   94518 system_pods.go:61] "storage-provisioner" [0ce9ef3e-e1d3-4979-b307-1d30a38cfc5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 14:16:38.790714   94518 system_pods.go:74] duration metric: took 6.637048ms to wait for pod list to return data ...
	I1018 14:16:38.790727   94518 default_sa.go:34] waiting for default service account to be created ...
	I1018 14:16:38.813945   94518 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 14:16:38.813976   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
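Note: each kapi.go waiter above first resolves its label selector to a pod list ("Found 2 Pods for label selector ...") and then blocks until every matched pod leaves Pending. A sketch of the lookup step with client-go; the kubeconfig path, namespace, and selector are taken from the log, everything else is assumed:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
		if err != nil {
			panic(err)
		}
		fmt.Printf("Found %d Pods for label selector\n", len(pods.Items))
		for _, p := range pods.Items {
			fmt.Println(p.Name, p.Status.Phase)
		}
	}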
	I1018 14:16:38.817277   94518 default_sa.go:45] found service account: "default"
	I1018 14:16:38.817303   94518 default_sa.go:55] duration metric: took 26.568684ms for default service account to be created ...
	I1018 14:16:38.817314   94518 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 14:16:38.836792   94518 system_pods.go:86] 20 kube-system pods found
	I1018 14:16:38.836840   94518 system_pods.go:89] "amd-gpu-device-plugin-ps8fn" [212e7c35-e96b-474d-a839-a1234082db1e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:16:38.836858   94518 system_pods.go:89] "coredns-66bc5c9577-zsv4k" [f2e200e1-f869-49c9-9964-a7ce8b78fc36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:16:38.836867   94518 system_pods.go:89] "csi-hostpath-attacher-0" [3623b39a-4edb-4205-aeef-6f143a1226ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 14:16:38.836875   94518 system_pods.go:89] "csi-hostpath-resizer-0" [d1f8fabd-93ae-47ef-9455-9609361e348d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 14:16:38.836883   94518 system_pods.go:89] "csi-hostpathplugin-t8ksl" [ed3177ab-2b66-47a9-8f89-40db56cbc332] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 14:16:38.836890   94518 system_pods.go:89] "etcd-addons-493618" [b9b0c453-cead-48a5-916f-b6743b403b4d] Running
	I1018 14:16:38.836900   94518 system_pods.go:89] "kindnet-vhk9j" [f447a047-b1f9-4b54-8f86-2bceeda6e6f2] Running
	I1018 14:16:38.836907   94518 system_pods.go:89] "kube-apiserver-addons-493618" [75f188ee-c145-4c6c-8fe4-ec8c6c92a91c] Running
	I1018 14:16:38.836927   94518 system_pods.go:89] "kube-controller-manager-addons-493618" [18b03cc0-9988-4bdf-b7a2-1c4c0a50ea7c] Running
	I1018 14:16:38.836935   94518 system_pods.go:89] "kube-ingress-dns-minikube" [d97bcf56-e215-486d-bd23-84d300a41f66] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:16:38.836944   94518 system_pods.go:89] "kube-proxy-5x2v2" [43716c06-fdd2-4f68-87f9-d90cf2f16440] Running
	I1018 14:16:38.836951   94518 system_pods.go:89] "kube-scheduler-addons-493618" [dc3fc718-9168-4264-aa9e-4ce985ac1d72] Running
	I1018 14:16:38.836958   94518 system_pods.go:89] "metrics-server-85b7d694d7-hzzlq" [93c9f378-d616-4060-a537-9060c4ce996a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:16:38.836970   94518 system_pods.go:89] "nvidia-device-plugin-daemonset-w9ks6" [0837d0f2-cef9-4ff6-b233-2547cb3c5f55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:16:38.836985   94518 system_pods.go:89] "registry-6b586f9694-pdjc2" [d5f69ebc-b615-4334-8fe1-593c6fe7f496] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:16:38.836997   94518 system_pods.go:89] "registry-creds-764b6fb674-czp24" [a3c3218a-127e-4d0d-90f6-a2b735fc7c5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:16:38.837005   94518 system_pods.go:89] "registry-proxy-dddz6" [55ecd78e-f023-433a-80aa-4a98aed74734] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:16:38.837016   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8ftdc" [41ec4b8b-e0f1-4aed-a826-e4d50c52e35d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:38.837026   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fcm6w" [fcafa25f-dac8-4c6c-9dee-6c155ff5f214] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:38.837036   94518 system_pods.go:89] "storage-provisioner" [0ce9ef3e-e1d3-4979-b307-1d30a38cfc5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 14:16:38.837060   94518 retry.go:31] will retry after 303.187947ms: missing components: kube-dns
	I1018 14:16:39.002953   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:39.146165   94518 system_pods.go:86] 20 kube-system pods found
	[... per-pod states unchanged from the 14:16:38.836 listing above ...]
	I1018 14:16:39.146407   94518 retry.go:31] will retry after 360.79099ms: missing components: kube-dns
	I1018 14:16:39.263006   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:39.276186   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:39.301149   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:39.502995   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:39.512628   94518 system_pods.go:86] 20 kube-system pods found
	[... per-pod states again unchanged from the 14:16:38.836 listing above ...]
	I1018 14:16:39.512881   94518 retry.go:31] will retry after 432.482193ms: missing components: kube-dns
	I1018 14:16:39.762902   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:39.776402   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:39.801542   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:39.950641   94518 system_pods.go:86] 20 kube-system pods found
	I1018 14:16:39.950687   94518 system_pods.go:89] "amd-gpu-device-plugin-ps8fn" [212e7c35-e96b-474d-a839-a1234082db1e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:16:39.950695   94518 system_pods.go:89] "coredns-66bc5c9577-zsv4k" [f2e200e1-f869-49c9-9964-a7ce8b78fc36] Running
	I1018 14:16:39.950708   94518 system_pods.go:89] "csi-hostpath-attacher-0" [3623b39a-4edb-4205-aeef-6f143a1226ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 14:16:39.950716   94518 system_pods.go:89] "csi-hostpath-resizer-0" [d1f8fabd-93ae-47ef-9455-9609361e348d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 14:16:39.950726   94518 system_pods.go:89] "csi-hostpathplugin-t8ksl" [ed3177ab-2b66-47a9-8f89-40db56cbc332] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 14:16:39.950733   94518 system_pods.go:89] "etcd-addons-493618" [b9b0c453-cead-48a5-916f-b6743b403b4d] Running
	I1018 14:16:39.950743   94518 system_pods.go:89] "kindnet-vhk9j" [f447a047-b1f9-4b54-8f86-2bceeda6e6f2] Running
	I1018 14:16:39.950755   94518 system_pods.go:89] "kube-apiserver-addons-493618" [75f188ee-c145-4c6c-8fe4-ec8c6c92a91c] Running
	I1018 14:16:39.950767   94518 system_pods.go:89] "kube-controller-manager-addons-493618" [18b03cc0-9988-4bdf-b7a2-1c4c0a50ea7c] Running
	I1018 14:16:39.950776   94518 system_pods.go:89] "kube-ingress-dns-minikube" [d97bcf56-e215-486d-bd23-84d300a41f66] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:16:39.950795   94518 system_pods.go:89] "kube-proxy-5x2v2" [43716c06-fdd2-4f68-87f9-d90cf2f16440] Running
	I1018 14:16:39.950805   94518 system_pods.go:89] "kube-scheduler-addons-493618" [dc3fc718-9168-4264-aa9e-4ce985ac1d72] Running
	I1018 14:16:39.950813   94518 system_pods.go:89] "metrics-server-85b7d694d7-hzzlq" [93c9f378-d616-4060-a537-9060c4ce996a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:16:39.950825   94518 system_pods.go:89] "nvidia-device-plugin-daemonset-w9ks6" [0837d0f2-cef9-4ff6-b233-2547cb3c5f55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:16:39.950837   94518 system_pods.go:89] "registry-6b586f9694-pdjc2" [d5f69ebc-b615-4334-8fe1-593c6fe7f496] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:16:39.950844   94518 system_pods.go:89] "registry-creds-764b6fb674-czp24" [a3c3218a-127e-4d0d-90f6-a2b735fc7c5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:16:39.950855   94518 system_pods.go:89] "registry-proxy-dddz6" [55ecd78e-f023-433a-80aa-4a98aed74734] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:16:39.950864   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8ftdc" [41ec4b8b-e0f1-4aed-a826-e4d50c52e35d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:39.950878   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fcm6w" [fcafa25f-dac8-4c6c-9dee-6c155ff5f214] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:39.950883   94518 system_pods.go:89] "storage-provisioner" [0ce9ef3e-e1d3-4979-b307-1d30a38cfc5e] Running
	I1018 14:16:39.950903   94518 system_pods.go:126] duration metric: took 1.133578445s to wait for k8s-apps to be running ...
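Note: the retry.go lines above show the k8s-apps wait polling with growing, jittered intervals (~303ms, ~361ms, ~432ms) until kube-dns stops being the missing component. A minimal sketch of that pattern; the growth factor and jitter bound are assumptions, not minikube's actual constants:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
		wait := initial
		for i := 0; i < attempts; i++ {
			if err := fn(); err == nil {
				return nil
			}
			// grow the wait ~20% per attempt and add up to 20% jitter (assumed values)
			jittered := wait + time.Duration(rand.Int63n(int64(wait)/5))
			fmt.Printf("will retry after %s\n", jittered)
			time.Sleep(jittered)
			wait = wait * 6 / 5
		}
		return errors.New("out of retries")
	}

	func main() {
		_ = retryWithBackoff(5, 300*time.Millisecond, func() error {
			return errors.New("missing components: kube-dns") // stand-in check
		})
	}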
	I1018 14:16:39.950927   94518 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 14:16:39.950986   94518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 14:16:39.969681   94518 system_svc.go:56] duration metric: took 18.745966ms WaitForService to wait for kubelet
	I1018 14:16:39.969710   94518 kubeadm.go:586] duration metric: took 42.30276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 14:16:39.969733   94518 node_conditions.go:102] verifying NodePressure condition ...
	I1018 14:16:39.972886   94518 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 14:16:39.972931   94518 node_conditions.go:123] node cpu capacity is 8
	I1018 14:16:39.972952   94518 node_conditions.go:105] duration metric: took 3.212854ms to run NodePressure ...
	I1018 14:16:39.972976   94518 start.go:241] waiting for startup goroutines ...
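Note: the NodePressure step reads the capacities above (304681132Ki ephemeral storage, 8 CPUs) straight from the Node object's status. A sketch of the same read using client-go; the kubeconfig path and node name come from the log, everything else is assumed:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-493618", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
		fmt.Println("ephemeral storage:", node.Status.Capacity.StorageEphemeral().String())
	}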
	I1018 14:16:40.002066   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same four "Pending" polls (ingress-nginx, csi-hostpath-driver, registry, gcp-auth) repeat every ~250ms from 14:16:40 through 14:16:51 ...]
	I1018 14:16:51.263088   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:51.276354   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:51.301136   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:51.480427   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:51.502052   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:51.762057   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:51.775898   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:51.801122   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:52.000897   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:52.028927   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:52.028967   94518 retry.go:31] will retry after 23.730297472s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
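
The validation failure above is self-describing: the first document in ig-crd.yaml reaches kubectl without the apiVersion and kind fields that every Kubernetes manifest must set, so client-side validation rejects the whole apply even though the ig-deployment.yaml objects go through unchanged. Below is a minimal Go sketch of the pre-flight check this implies; it assumes the sigs.k8s.io/yaml and k8s.io/apimachinery packages (both already in minikube's dependency tree) and is an illustration, not minikube's actual code.

// Minimal sketch (not minikube's actual code): pre-validate that every
// document in a manifest file sets apiVersion and kind before handing it
// to kubectl, which is exactly what the error above complains about.
package main

import (
	"bytes"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func validateManifest(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Rough split on the multi-document separator; good enough for a sketch.
	for i, doc := range bytes.Split(data, []byte("\n---")) {
		if len(bytes.TrimSpace(doc)) == 0 {
			continue
		}
		var tm metav1.TypeMeta
		if err := yaml.Unmarshal(doc, &tm); err != nil {
			return fmt.Errorf("document %d: %v", i, err)
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			return fmt.Errorf("document %d: apiVersion or kind not set", i)
		}
	}
	return nil
}

func main() {
	if err := validateManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, "validation failed:", err)
		os.Exit(1)
	}
}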
	I1018 14:16:52.262296   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:52.276403   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:52.301168   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:52.502234   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:52.762724   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:52.776030   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:52.800809   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:53.002194   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:53.263147   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:53.276322   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:53.301440   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:53.501640   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:53.762159   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:53.780927   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:53.801573   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:54.001940   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:54.262129   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:54.275901   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:54.300784   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:54.502117   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:54.762236   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:54.863504   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:54.863546   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:55.001421   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:55.263239   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:55.276598   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:55.301592   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:55.502021   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:55.762642   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:55.775215   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:55.801168   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:56.001789   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:56.262562   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:56.276012   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:56.301105   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:56.501757   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:56.762498   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:56.842533   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:56.842884   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:57.002277   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:57.263015   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:57.275626   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:57.301290   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:57.501174   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:57.764069   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:57.777805   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:57.802024   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:58.001971   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:58.262456   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:58.276292   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:58.301340   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:58.501658   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:58.763184   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:58.776640   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:58.801759   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:59.002068   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:59.275369   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:59.276620   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:59.301023   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:59.501710   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:59.763756   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:59.865186   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:59.865222   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:00.002706   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:00.265539   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:00.279599   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:00.301880   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:00.502335   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:00.763538   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:00.775930   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:00.801897   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:01.002519   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:01.262026   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:01.276130   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:01.362572   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:01.501369   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:01.763644   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:01.779108   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:01.801020   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:02.001535   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:02.262634   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:02.276612   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:02.303963   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:02.501305   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:02.762496   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:02.776181   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:02.801068   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:03.002743   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:03.262111   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:03.276934   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:03.300828   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:03.504229   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:03.763691   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:03.776119   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:03.800631   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:04.003713   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:04.262687   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:04.276482   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:04.301743   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:04.502068   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:04.763078   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:04.776689   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:04.802101   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:05.001886   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:05.262410   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:05.276337   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:05.307319   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:05.501644   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:05.762053   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:05.776369   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:05.801797   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:06.002447   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:06.262193   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:06.275849   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:06.302174   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:06.502353   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:06.762956   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:06.776611   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:06.801155   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:07.001449   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:07.262841   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:07.276120   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:07.301192   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:07.502865   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:07.762883   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:07.776486   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:07.801984   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:08.002204   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:08.262684   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:08.275841   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:08.300609   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:08.501552   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:08.761868   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:08.777284   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:08.801575   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:09.002088   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:09.262321   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:09.275116   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:09.300794   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:09.502103   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:09.763105   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:09.775593   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:09.802027   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:10.002530   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:10.262721   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:10.363567   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:10.363604   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:10.501248   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:10.762594   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:10.775272   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:10.828298   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:11.002160   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:11.262832   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:11.275989   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:11.300855   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:11.504707   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:11.762245   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:11.776332   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:11.801408   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:12.002170   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:12.262266   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:12.276626   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:12.301680   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:12.502059   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:12.762293   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:12.776456   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:12.801320   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:13.001785   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:13.262871   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:13.276298   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:13.302882   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:13.503814   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:13.762416   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:13.844416   94518 kapi.go:107] duration metric: took 1m15.046903502s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 14:17:13.845081   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:14.002739   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:14.262420   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:14.276625   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:14.501876   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:14.763082   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:14.776373   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:15.002215   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:15.262541   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:15.275951   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:15.503027   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:15.759384   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:17:15.762301   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:15.776692   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:16.002515   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:16.262732   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:16.275795   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 14:17:16.451253   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:17:16.451302   94518 retry.go:31] will retry after 39.128992898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
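
The retry delays recorded above (23.7s after the first failure, 39.1s after the second) grow and are not round numbers, which is the usual signature of exponential backoff with jitter. The following generic Go sketch shows that pattern; it is an assumption about the shape of the behavior, not a copy of minikube's retry package.

// Illustrative sketch of exponential backoff with jitter, the pattern
// suggested by the growing "will retry after ..." delays above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryExpo(attempts int, base, limit time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Double the window each attempt, cap it, then pick a random
		// point inside it so concurrent retriers don't synchronize.
		window := base << uint(i)
		if window > limit {
			window = limit
		}
		sleep := time.Duration(rand.Int63n(int64(window)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	err := retryExpo(5, 10*time.Second, 2*time.Minute, func() error {
		return errors.New("apply failed") // stand-in for the failing kubectl apply
	})
	fmt.Println("gave up:", err)
}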
	I1018 14:17:16.501604   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:16.763396   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:16.775487   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:17.004186   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:17.262984   94518 kapi.go:107] duration metric: took 1m18.00445624s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 14:17:17.276176   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:17.501480   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:17.776270   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:18.002634   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:18.276586   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:18.501658   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:18.776313   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:19.001775   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:19.276193   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:19.502728   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:19.776495   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:20.000907   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:20.276522   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:20.501176   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:20.775718   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:21.002256   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:21.276110   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:21.502718   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:21.776475   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:22.001349   94518 kapi.go:107] duration metric: took 1m16.00324245s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 14:17:22.003029   94518 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-493618 cluster.
	I1018 14:17:22.004220   94518 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 14:17:22.005269   94518 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
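
The hint above names the gcp-auth-skip-secret label. As a sketch only (pod name, image, and the label value are placeholders; the webhook keys on the label's presence), this is what a pod object carrying it looks like when built with client-go types:

// Sketch: a pod carrying the gcp-auth-skip-secret label mentioned above,
// so the gcp-auth webhook leaves it alone. Names and image are placeholders.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds",
			// Opts this pod out of credential mounting.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}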
	I1018 14:17:22.276180   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:22.776487   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:23.276181   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:23.777075   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:24.308479   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:24.777192   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:25.275835   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:25.777029   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:26.276794   94518 kapi.go:107] duration metric: took 1m26.504622464s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 14:17:55.584930   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 14:17:56.123857   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 14:17:56.124019   94518 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1018 14:17:56.126955   94518 out.go:179] * Enabled addons: storage-provisioner, cloud-spanner, registry-creds, ingress-dns, metrics-server, nvidia-device-plugin, amd-gpu-device-plugin, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1018 14:17:56.127992   94518 addons.go:514] duration metric: took 1m58.460970758s for enable addons: enabled=[storage-provisioner cloud-spanner registry-creds ingress-dns metrics-server nvidia-device-plugin amd-gpu-device-plugin yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1018 14:17:56.128052   94518 start.go:246] waiting for cluster config update ...
	I1018 14:17:56.128083   94518 start.go:255] writing updated cluster config ...
	I1018 14:17:56.128406   94518 ssh_runner.go:195] Run: rm -f paused
	I1018 14:17:56.132411   94518 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 14:17:56.136263   94518 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zsv4k" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.140509   94518 pod_ready.go:94] pod "coredns-66bc5c9577-zsv4k" is "Ready"
	I1018 14:17:56.140532   94518 pod_ready.go:86] duration metric: took 4.248281ms for pod "coredns-66bc5c9577-zsv4k" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.142491   94518 pod_ready.go:83] waiting for pod "etcd-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.146289   94518 pod_ready.go:94] pod "etcd-addons-493618" is "Ready"
	I1018 14:17:56.146311   94518 pod_ready.go:86] duration metric: took 3.8003ms for pod "etcd-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.148001   94518 pod_ready.go:83] waiting for pod "kube-apiserver-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.151493   94518 pod_ready.go:94] pod "kube-apiserver-addons-493618" is "Ready"
	I1018 14:17:56.151516   94518 pod_ready.go:86] duration metric: took 3.485308ms for pod "kube-apiserver-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.153295   94518 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.536543   94518 pod_ready.go:94] pod "kube-controller-manager-addons-493618" is "Ready"
	I1018 14:17:56.536571   94518 pod_ready.go:86] duration metric: took 383.254622ms for pod "kube-controller-manager-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.736793   94518 pod_ready.go:83] waiting for pod "kube-proxy-5x2v2" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:57.136427   94518 pod_ready.go:94] pod "kube-proxy-5x2v2" is "Ready"
	I1018 14:17:57.136456   94518 pod_ready.go:86] duration metric: took 399.638474ms for pod "kube-proxy-5x2v2" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:57.336271   94518 pod_ready.go:83] waiting for pod "kube-scheduler-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:57.736585   94518 pod_ready.go:94] pod "kube-scheduler-addons-493618" is "Ready"
	I1018 14:17:57.736613   94518 pod_ready.go:86] duration metric: took 400.31858ms for pod "kube-scheduler-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:57.736623   94518 pod_ready.go:40] duration metric: took 1.604180528s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 14:17:57.782211   94518 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 14:17:57.783876   94518 out.go:179] * Done! kubectl is now configured to use "addons-493618" cluster and "default" namespace by default
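
The pod_ready lines above record a poll loop: fetch each kube-system pod and test its Ready condition until the 4m0s budget runs out. A hedged client-go sketch of that loop follows; the kubeconfig path and pod name are taken from this log, everything else is illustrative rather than minikube's pod_ready.go.

// Hedged sketch of the readiness poll the pod_ready lines above record:
// repeatedly fetch a pod and check its Ready condition until a timeout.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-addons-493618", metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient errors and keep polling
			}
			return isReady(pod), nil
		})
	fmt.Println("ready:", err == nil)
}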
	
	
	==> CRI-O <==
	Oct 18 14:21:15 addons-493618 crio[781]: time="2025-10-18T14:21:15.724181085Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 18 14:21:47 addons-493618 crio[781]: time="2025-10-18T14:21:47.063004217Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=7d413390-ca76-47d5-a701-3b49bfd5fcef name=/runtime.v1.ImageService/PullImage
	Oct 18 14:21:47 addons-493618 crio[781]: time="2025-10-18T14:21:47.066933975Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 18 14:22:18 addons-493618 crio[781]: time="2025-10-18T14:22:18.396636829Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 18 14:22:25 addons-493618 crio[781]: time="2025-10-18T14:22:25.400134611Z" level=info msg="Pulling image: docker.io/nginx:latest" id=c8c2c595-8674-4a2d-ae5c-de6f47e4464e name=/runtime.v1.ImageService/PullImage
	Oct 18 14:22:25 addons-493618 crio[781]: time="2025-10-18T14:22:25.401574675Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 18 14:22:26 addons-493618 crio[781]: time="2025-10-18T14:22:26.083599232Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=1e3c0d76-173e-4e24-a0b0-04bc4a89d11e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:22:26 addons-493618 crio[781]: time="2025-10-18T14:22:26.083749509Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=1e3c0d76-173e-4e24-a0b0-04bc4a89d11e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:22:26 addons-493618 crio[781]: time="2025-10-18T14:22:26.083797404Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=1e3c0d76-173e-4e24-a0b0-04bc4a89d11e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:22:40 addons-493618 crio[781]: time="2025-10-18T14:22:40.528654069Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c7f81962-5bf4-4a7a-8ac3-6991a4705d62 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:22:40 addons-493618 crio[781]: time="2025-10-18T14:22:40.528847437Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=c7f81962-5bf4-4a7a-8ac3-6991a4705d62 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:22:40 addons-493618 crio[781]: time="2025-10-18T14:22:40.528896478Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=c7f81962-5bf4-4a7a-8ac3-6991a4705d62 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:22:56 addons-493618 crio[781]: time="2025-10-18T14:22:56.732217417Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 18 14:23:28 addons-493618 crio[781]: time="2025-10-18T14:23:28.068631092Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=c60f9ed0-071d-4e99-b730-f71a4fe8059e name=/runtime.v1.ImageService/PullImage
	Oct 18 14:23:28 addons-493618 crio[781]: time="2025-10-18T14:23:28.08070178Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 18 14:23:59 addons-493618 crio[781]: time="2025-10-18T14:23:59.415756176Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 18 14:24:14 addons-493618 crio[781]: time="2025-10-18T14:24:14.60237699Z" level=info msg="Pulling image: docker.io/nginx:latest" id=0c942f56-29e4-401f-84f8-0be3494f7bda name=/runtime.v1.ImageService/PullImage
	Oct 18 14:24:14 addons-493618 crio[781]: time="2025-10-18T14:24:14.60400575Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 18 14:24:26 addons-493618 crio[781]: time="2025-10-18T14:24:26.527976121Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c3db508a-3a15-4ea9-a633-a0178676ba35 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:24:26 addons-493618 crio[781]: time="2025-10-18T14:24:26.528151961Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=c3db508a-3a15-4ea9-a633-a0178676ba35 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:24:26 addons-493618 crio[781]: time="2025-10-18T14:24:26.528195038Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=c3db508a-3a15-4ea9-a633-a0178676ba35 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:24:40 addons-493618 crio[781]: time="2025-10-18T14:24:40.528600308Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=bc91c587-0d90-4fda-b81a-32c81d6f6e91 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:24:40 addons-493618 crio[781]: time="2025-10-18T14:24:40.528805786Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=bc91c587-0d90-4fda-b81a-32c81d6f6e91 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:24:40 addons-493618 crio[781]: time="2025-10-18T14:24:40.528858257Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=bc91c587-0d90-4fda-b81a-32c81d6f6e91 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:24:45 addons-493618 crio[781]: time="2025-10-18T14:24:45.942300768Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
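
The "Pulling image" / "Trying to access" lines above are CRI-O's ImageService answering PullImage RPCs, retried here because the docker.io pulls keep failing. The sketch below issues the same call over the CRI gRPC socket; it assumes CRI-O's default socket path, the k8s.io/cri-api module, and a grpc-go recent enough to have grpc.NewClient.

// Sketch: issue the same ImageService/PullImage call the CRI-O log
// lines above are answering. Assumes CRI-O's default unix socket.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"},
	})
	if err != nil {
		fmt.Println("pull failed:", err) // e.g. registry rate limiting
		return
	}
	fmt.Println("pulled:", resp.ImageRef)
}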
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	a2d90a4bb564c       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             6 minutes ago       Running             registry-creds                           0                   c7208d1abb3e0       registry-creds-764b6fb674-czp24             kube-system
	38eab5508e267       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              6 minutes ago       Running             nginx                                    0                   d9445a77ecd5a       nginx                                       default
	f0ed3f5d6ffa8       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          6 minutes ago       Running             busybox                                  0                   097945ff6ffef       busybox                                     default
	fcb7161ee1d1b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          7 minutes ago       Running             csi-snapshotter                          0                   d3d06bcc5d099       csi-hostpathplugin-t8ksl                    kube-system
	8f357a51c6b5d       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   d3d06bcc5d099       csi-hostpathplugin-t8ksl                    kube-system
	530e145d6c2e0       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   d3d06bcc5d099       csi-hostpathplugin-t8ksl                    kube-system
	84cd4c11831db       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   d3d06bcc5d099       csi-hostpathplugin-t8ksl                    kube-system
	edfb43ced2e1e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 7 minutes ago       Running             gcp-auth                                 0                   0c4aa9fe754c5       gcp-auth-78565c9fb4-mwgsp                   gcp-auth
	fcf3c24788988       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            7 minutes ago       Running             gadget                                   0                   0c73b5d5a20a9       gadget-vm8lx                                gadget
	10ae25ecd1d90       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   d3d06bcc5d099       csi-hostpathplugin-t8ksl                    kube-system
	45501fab46f05       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             7 minutes ago       Running             controller                               0                   3e90b0db82f21       ingress-nginx-controller-675c5ddd98-sndwh   ingress-nginx
	50a19f5b596d4       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              7 minutes ago       Running             registry-proxy                           0                   5ce9bbd315430       registry-proxy-dddz6                        kube-system
	859d5d72eef12       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     7 minutes ago       Running             amd-gpu-device-plugin                    0                   ce015c134568b       amd-gpu-device-plugin-ps8fn                 kube-system
	78aea4ac76ed2       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     7 minutes ago       Running             nvidia-device-plugin-ctr                 0                   d601227de066c       nvidia-device-plugin-daemonset-w9ks6        kube-system
	775733aea8bf0       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   d3d06bcc5d099       csi-hostpathplugin-t8ksl                    kube-system
	32ea63c74de31       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              7 minutes ago       Running             yakd                                     0                   06ef25b517353       yakd-dashboard-5ff678cb9-cqgkj              yakd-dashboard
	6673efa077656       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              8 minutes ago       Running             csi-resizer                              0                   dec41ec76cd03       csi-hostpath-resizer-0                      kube-system
	89679d50a3910       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             8 minutes ago       Running             csi-attacher                             0                   0048d743f42d1       csi-hostpath-attacher-0                     kube-system
	c52d44cde4f71       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      8 minutes ago       Running             volume-snapshot-controller               0                   b534c52d0c84c       snapshot-controller-7d9fbc56b8-fcm6w        kube-system
	6883ad86fcecd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   8 minutes ago       Exited              patch                                    0                   a08859b82414b       ingress-nginx-admission-patch-vxb5f         ingress-nginx
	a9e1fbf487f51       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               8 minutes ago       Running             cloud-spanner-emulator                   0                   69532574c7971       cloud-spanner-emulator-86bd5cbb97-2nxxs     default
	8e896cc7ee32d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   8 minutes ago       Exited              create                                   0                   f011cb8ba518a       ingress-nginx-admission-create-tnv6j        ingress-nginx
	92ceaca691f51       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      8 minutes ago       Running             volume-snapshot-controller               0                   2ee75d4e4001f       snapshot-controller-7d9fbc56b8-8ftdc        kube-system
	da0ddb2d0550b       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               8 minutes ago       Running             minikube-ingress-dns                     0                   c8aaf317eece5       kube-ingress-dns-minikube                   kube-system
	79474cdc2efcd       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             8 minutes ago       Running             local-path-provisioner                   0                   89516e7730f54       local-path-provisioner-648f6765c9-xgggg     local-path-storage
	a51f3eea29502       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           8 minutes ago       Running             registry                                 0                   97f317fc1b5dc       registry-6b586f9694-pdjc2                   kube-system
	ca1869e801d6e       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        8 minutes ago       Running             metrics-server                           0                   8f3ce70811032       metrics-server-85b7d694d7-hzzlq             kube-system
	7fc1c430e912b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             8 minutes ago       Running             coredns                                  0                   4107d196d2062       coredns-66bc5c9577-zsv4k                    kube-system
	d41651660ae84       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago       Running             storage-provisioner                      0                   ffc42416a6b3e       storage-provisioner                         kube-system
	778f4f35207fc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             8 minutes ago       Running             kindnet-cni                              0                   5b6cacbfc954b       kindnet-vhk9j                               kube-system
	fc19fe3563e01       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             8 minutes ago       Running             kube-proxy                               0                   ff4d1c0bbd1d6       kube-proxy-5x2v2                            kube-system
	f616a2d4df678       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             9 minutes ago       Running             kube-apiserver                           0                   9bbc44f90a4b5       kube-apiserver-addons-493618                kube-system
	411a5716e9150       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             9 minutes ago       Running             etcd                                     0                   56968af9a8607       etcd-addons-493618                          kube-system
	857014c2e77ee       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             9 minutes ago       Running             kube-scheduler                           0                   3e0b656b74b60       kube-scheduler-addons-493618                kube-system
	aa8c1cbd9ac9c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             9 minutes ago       Running             kube-controller-manager                  0                   a4c04910854cf       kube-controller-manager-addons-493618       kube-system
	
	
	==> coredns [7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca] <==
	[INFO] 10.244.0.20:52581 - 1894 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000104746s
	[INFO] 10.244.0.20:59573 - 6451 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000099707s
	[INFO] 10.244.0.20:59573 - 59819 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00009792s
	[INFO] 10.244.0.20:59573 - 52671 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000084426s
	[INFO] 10.244.0.20:59573 - 64935 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000079707s
	[INFO] 10.244.0.20:59573 - 22804 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000117348s
	[INFO] 10.244.0.20:43535 - 51274 "A IN hello-world-app.default.svc.cluster.local.local. udp 65 false 512" NXDOMAIN qr,rd,ra 65 0.002699188s
	[INFO] 10.244.0.20:59573 - 15457 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000129689s
	[INFO] 10.244.0.20:43535 - 44312 "AAAA IN hello-world-app.default.svc.cluster.local.local. udp 65 false 512" NXDOMAIN qr,rd,ra 65 0.002623561s
	[INFO] 10.244.0.20:43535 - 19523 "A IN hello-world-app.default.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 201 0.000090962s
	[INFO] 10.244.0.20:43535 - 42897 "AAAA IN hello-world-app.default.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 201 0.000076246s
	[INFO] 10.244.0.20:43535 - 57781 "A IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,aa,rd,ra 188 0.000065497s
	[INFO] 10.244.0.20:43535 - 50051 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,aa,rd,ra 188 0.000058079s
	[INFO] 10.244.0.20:43535 - 18865 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,aa,rd,ra 180 0.000075619s
	[INFO] 10.244.0.20:43535 - 44870 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,aa,rd,ra 180 0.000057369s
	[INFO] 10.244.0.20:59573 - 18996 "A IN hello-world-app.default.svc.cluster.local.local. udp 65 false 512" NXDOMAIN qr,rd,ra 65 0.003596192s
	[INFO] 10.244.0.20:43535 - 54692 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000124408s
	[INFO] 10.244.0.20:59573 - 31686 "AAAA IN hello-world-app.default.svc.cluster.local.local. udp 65 false 512" NXDOMAIN qr,rd,ra 65 0.002318423s
	[INFO] 10.244.0.20:59573 - 5286 "A IN hello-world-app.default.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 201 0.000061223s
	[INFO] 10.244.0.20:59573 - 42263 "AAAA IN hello-world-app.default.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 201 0.000067099s
	[INFO] 10.244.0.20:59573 - 47568 "A IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,aa,rd,ra 188 0.000062292s
	[INFO] 10.244.0.20:59573 - 47417 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,aa,rd,ra 188 0.000070454s
	[INFO] 10.244.0.20:59573 - 20218 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,aa,rd,ra 180 0.00009213s
	[INFO] 10.244.0.20:59573 - 13620 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,aa,rd,ra 180 0.000057412s
	[INFO] 10.244.0.20:59573 - 39759 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000111251s
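	The NXDOMAIN burst above is ordinary resolv.conf search-path expansion rather than a CoreDNS fault: Kubernetes pods default to options ndots:5, so a name with fewer than five dots is tried against every search suffix before the absolute name is queried and answered NOERROR. A minimal stdlib Go sketch of that expansion follows; the search list is reconstructed from the NXDOMAIN suffixes visible above and is an assumption about the querying pod's resolv.conf.
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	func main() {
		// Name queried by the hello-world-app tests; it has 4 dots, below ndots:5.
		name := "hello-world-app.default.svc.cluster.local"
		// Assumed search list, reconstructed from the NXDOMAIN lines in the log.
		search := []string{
			"ingress-nginx.svc.cluster.local", "svc.cluster.local", "cluster.local",
			"local", "us-west1-a.c.k8s-minikube.internal",
			"c.k8s-minikube.internal", "google.internal",
		}
		if strings.Count(name, ".") < 5 { // ndots:5: try every suffix first
			for _, s := range search {
				fmt.Printf("%s.%s -> NXDOMAIN\n", name, s)
			}
		}
		fmt.Printf("%s. -> NOERROR\n", name) // the absolute name finally resolves
	}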
	
	
	==> describe nodes <==
	Name:               addons-493618
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-493618
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=addons-493618
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T14_15_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-493618
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-493618"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 14:15:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-493618
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 14:24:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 14:21:18 +0000   Sat, 18 Oct 2025 14:15:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 14:21:18 +0000   Sat, 18 Oct 2025 14:15:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 14:21:18 +0000   Sat, 18 Oct 2025 14:15:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 14:21:18 +0000   Sat, 18 Oct 2025 14:16:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-493618
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                c99ec94e-dad8-466b-986d-f557d98b8e1c
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (30 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	  default                     cloud-spanner-emulator-86bd5cbb97-2nxxs      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
	  default                     hello-world-app-5d498dc89-9gb5k              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  default                     task-pv-pod-restore                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gadget                      gadget-vm8lx                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  gcp-auth                    gcp-auth-78565c9fb4-mwgsp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m52s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-sndwh    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         8m58s
	  kube-system                 amd-gpu-device-plugin-ps8fn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 coredns-66bc5c9577-zsv4k                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 csi-hostpathplugin-t8ksl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 etcd-addons-493618                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m6s
	  kube-system                 kindnet-vhk9j                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m
	  kube-system                 kube-apiserver-addons-493618                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-controller-manager-addons-493618        200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
	  kube-system                 kube-proxy-5x2v2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m
	  kube-system                 kube-scheduler-addons-493618                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 metrics-server-85b7d694d7-hzzlq              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         8m59s
	  kube-system                 nvidia-device-plugin-daemonset-w9ks6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 registry-6b586f9694-pdjc2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
	  kube-system                 registry-creds-764b6fb674-czp24              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
	  kube-system                 registry-proxy-dddz6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 snapshot-controller-7d9fbc56b8-8ftdc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 snapshot-controller-7d9fbc56b8-fcm6w         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
	  local-path-storage          local-path-provisioner-648f6765c9-xgggg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-cqgkj               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     8m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m59s  kube-proxy       
	  Normal  Starting                 9m6s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m6s   kubelet          Node addons-493618 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m6s   kubelet          Node addons-493618 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m6s   kubelet          Node addons-493618 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m1s   node-controller  Node addons-493618 event: Registered Node addons-493618 in Controller
	  Normal  NodeReady                8m19s  kubelet          Node addons-493618 status is now: NodeReady
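	The "Allocated resources" block above is simply the column sums of the pod table: eight pods carry non-zero CPU requests totalling 1050m, and against 8000m allocatable that is about 13%. A quick stdlib Go check, with the request values copied from the Non-terminated Pods table:
	
	package main
	
	import "fmt"
	
	func main() {
		// Non-zero CPU requests (millicores) from the Non-terminated Pods table:
		// ingress-nginx, coredns, etcd, kindnet, apiserver, controller-manager,
		// scheduler, metrics-server.
		requests := []int{100, 100, 100, 100, 250, 200, 100, 100}
		total := 0
		for _, m := range requests {
			total += m
		}
		const allocatable = 8000 // 8 CPUs, per the Allocatable block
		fmt.Printf("%dm of %dm = %.0f%%\n", total, allocatable,
			100*float64(total)/float64(allocatable)) // 1050m of 8000m = 13%
	}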
	
	
	==> dmesg <==
	[  +0.096767] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026410] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.055938] kauditd_printk_skb: 47 callbacks suppressed
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
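	The repeated "martian source" entries record packets that reached eth0 carrying the loopback source 127.0.0.1, which is invalid off the lo interface; with the Docker driver this is typically hairpin/NAT traffic and benign. The kernel emits these lines only while the log_martians sysctl is enabled, which can be checked directly (Linux only):
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		// "martian source" logging is gated by this sysctl (max of the global
		// and per-interface values).
		b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/log_martians")
		if err != nil {
			fmt.Println("cannot read sysctl:", err)
			return
		}
		fmt.Println("net.ipv4.conf.all.log_martians =", strings.TrimSpace(string(b)))
	}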
	
	
	==> etcd [411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5] <==
	{"level":"warn","ts":"2025-10-18T14:15:48.524192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.530308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.536786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.546053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.559657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.566802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.575632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.584037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.591784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.605020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.612481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.619606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.634187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.637964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.644321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.650704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.695116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:16:00.196257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:16:00.202493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:16:26.281250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:16:26.287738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:16:26.308478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:16:26.315202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39426","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T14:16:59.273487Z","caller":"traceutil/trace.go:172","msg":"trace[603722411] transaction","detail":"{read_only:false; response_revision:1051; number_of_response:1; }","duration":"100.664551ms","start":"2025-10-18T14:16:59.172784Z","end":"2025-10-18T14:16:59.273449Z","steps":["trace[603722411] 'process raft request'  (duration: 100.381339ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:17:24.306442Z","caller":"traceutil/trace.go:172","msg":"trace[1562610933] transaction","detail":"{read_only:false; response_revision:1203; number_of_response:1; }","duration":"100.640382ms","start":"2025-10-18T14:17:24.205781Z","end":"2025-10-18T14:17:24.306422Z","steps":["trace[1562610933] 'process raft request'  (duration: 64.205106ms)","trace[1562610933] 'compare'  (duration: 36.281867ms)"],"step_count":2}
	
	
	==> gcp-auth [edfb43ced2e1e4c4fbb178805c38e20bf5073a4864e99ecf580aa951e010b54f] <==
	2025/10/18 14:17:21 GCP Auth Webhook started!
	2025/10/18 14:17:58 Ready to marshal response ...
	2025/10/18 14:17:58 Ready to write response ...
	2025/10/18 14:17:58 Ready to marshal response ...
	2025/10/18 14:17:58 Ready to write response ...
	2025/10/18 14:17:58 Ready to marshal response ...
	2025/10/18 14:17:58 Ready to write response ...
	2025/10/18 14:18:12 Ready to marshal response ...
	2025/10/18 14:18:12 Ready to write response ...
	2025/10/18 14:18:16 Ready to marshal response ...
	2025/10/18 14:18:16 Ready to write response ...
	2025/10/18 14:18:20 Ready to marshal response ...
	2025/10/18 14:18:20 Ready to write response ...
	2025/10/18 14:18:20 Ready to marshal response ...
	2025/10/18 14:18:20 Ready to write response ...
	2025/10/18 14:18:23 Ready to marshal response ...
	2025/10/18 14:18:23 Ready to write response ...
	2025/10/18 14:18:31 Ready to marshal response ...
	2025/10/18 14:18:31 Ready to write response ...
	2025/10/18 14:18:55 Ready to marshal response ...
	2025/10/18 14:18:55 Ready to write response ...
	2025/10/18 14:20:36 Ready to marshal response ...
	2025/10/18 14:20:36 Ready to write response ...
	
	
	==> kernel <==
	 14:24:57 up  2:07,  0 user,  load average: 0.43, 0.93, 1.95
	Linux addons-493618 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750] <==
	I1018 14:22:48.061250       1 main.go:301] handling current node
	I1018 14:22:58.092800       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:22:58.092833       1 main.go:301] handling current node
	I1018 14:23:08.061225       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:23:08.061258       1 main.go:301] handling current node
	I1018 14:23:18.060864       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:23:18.060900       1 main.go:301] handling current node
	I1018 14:23:28.061231       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:23:28.061262       1 main.go:301] handling current node
	I1018 14:23:38.060859       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:23:38.060925       1 main.go:301] handling current node
	I1018 14:23:48.061154       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:23:48.061186       1 main.go:301] handling current node
	I1018 14:23:58.061351       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:23:58.061382       1 main.go:301] handling current node
	I1018 14:24:08.060783       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:24:08.060819       1 main.go:301] handling current node
	I1018 14:24:18.061216       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:24:18.061257       1 main.go:301] handling current node
	I1018 14:24:28.061313       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:24:28.061365       1 main.go:301] handling current node
	I1018 14:24:38.060572       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:24:38.060970       1 main.go:301] handling current node
	I1018 14:24:48.061335       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:24:48.061377       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4] <==
	W1018 14:16:26.315041       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 14:16:38.576682       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.149.8:443: connect: connection refused
	E1018 14:16:38.576731       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.149.8:443: connect: connection refused" logger="UnhandledError"
	W1018 14:16:38.576868       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.149.8:443: connect: connection refused
	E1018 14:16:38.576902       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.149.8:443: connect: connection refused" logger="UnhandledError"
	W1018 14:16:38.600334       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.149.8:443: connect: connection refused
	E1018 14:16:38.600374       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.149.8:443: connect: connection refused" logger="UnhandledError"
	W1018 14:16:38.600902       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.149.8:443: connect: connection refused
	E1018 14:16:38.600965       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.149.8:443: connect: connection refused" logger="UnhandledError"
	E1018 14:16:41.703457       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.161.37:443: connect: connection refused" logger="UnhandledError"
	W1018 14:16:41.703665       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 14:16:41.703731       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1018 14:16:41.704079       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.161.37:443: connect: connection refused" logger="UnhandledError"
	E1018 14:16:41.709516       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.161.37:443: connect: connection refused" logger="UnhandledError"
	E1018 14:16:41.731124       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.161.37:443: connect: connection refused" logger="UnhandledError"
	I1018 14:16:41.803282       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 14:18:06.446462       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36446: use of closed network connection
	E1018 14:18:06.603755       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36468: use of closed network connection
	I1018 14:18:12.402027       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 14:18:12.584964       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.131.156"}
	I1018 14:18:34.708350       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1018 14:20:36.111613       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.185.2"}
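	The "Failed calling webhook, failing open gcp-auth-mutate.k8s.io" pairs show the apiserver tolerating an admission webhook whose backing service was not yet reachable: a webhook registered with failurePolicy: Ignore admits the request when the call fails instead of rejecting it. A minimal sketch of that fail-open decision in plain stdlib Go (illustrative only, not the apiserver's actual code path):
	
	package main
	
	import (
		"errors"
		"fmt"
		"net"
	)
	
	// callWebhook simulates the dial error from the log (gcp-auth not yet serving).
	func callWebhook() error {
		return &net.OpError{Op: "dial", Err: errors.New("connection refused")}
	}
	
	// admit applies the failure policy: Ignore fails open, Fail fails closed.
	func admit(failOpen bool) bool {
		if err := callWebhook(); err != nil {
			fmt.Println("failed calling webhook, failing open:", err)
			return failOpen
		}
		return true
	}
	
	func main() {
		fmt.Println("admitted:", admit(true)) // Ignore => request admitted anyway
	}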
	
	
	==> kube-controller-manager [aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8] <==
	I1018 14:15:56.264599       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 14:15:56.264698       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 14:15:56.265922       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 14:15:56.268232       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 14:15:56.268288       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 14:15:56.268335       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 14:15:56.268348       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 14:15:56.268355       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 14:15:56.268387       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 14:15:56.269609       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:15:56.269629       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 14:15:56.269638       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 14:15:56.269971       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 14:15:56.275422       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-493618" podCIDRs=["10.244.0.0/24"]
	I1018 14:15:56.277385       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 14:15:56.289378       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 14:15:58.850088       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1018 14:16:26.274934       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 14:16:26.275118       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 14:16:26.275191       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 14:16:26.299136       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 14:16:26.302741       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 14:16:26.376108       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 14:16:26.403598       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:16:41.219427       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa] <==
	I1018 14:15:57.532244       1 server_linux.go:53] "Using iptables proxy"
	I1018 14:15:57.592753       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 14:15:57.697045       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:15:57.697101       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 14:15:57.697216       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:15:57.841695       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 14:15:57.841901       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:15:57.911876       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:15:57.922658       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:15:57.939484       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:15:57.952373       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:15:57.952400       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:15:57.952456       1 config.go:200] "Starting service config controller"
	I1018 14:15:57.952467       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:15:57.952500       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:15:57.952508       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:15:57.954225       1 config.go:309] "Starting node config controller"
	I1018 14:15:57.954269       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:15:57.954278       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:15:58.053620       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 14:15:58.053669       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:15:58.053697       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1] <==
	E1018 14:15:49.134247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:15:49.134258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:15:49.134330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 14:15:49.134307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:15:49.134338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 14:15:49.134328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 14:15:49.134351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:15:49.134453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:15:49.134460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 14:15:49.946543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 14:15:49.998890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:15:50.032174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:15:50.063609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:15:50.072057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 14:15:50.134634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 14:15:50.154988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:15:50.166165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 14:15:50.179329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 14:15:50.235814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 14:15:50.269111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:15:50.270159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:15:50.295510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 14:15:50.353863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 14:15:50.392021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1018 14:15:52.930460       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 14:21:47 addons-493618 kubelet[1280]: E1018 14:21:47.062799    1280 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod-restore_default(55a04f24-70ab-4ed9-9957-f15ef2c7f034): ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 14:21:47 addons-493618 kubelet[1280]: E1018 14:21:47.062866    1280 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod-restore" podUID="55a04f24-70ab-4ed9-9957-f15ef2c7f034"
	Oct 18 14:21:59 addons-493618 kubelet[1280]: E1018 14:21:59.528812    1280 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod-restore" podUID="55a04f24-70ab-4ed9-9957-f15ef2c7f034"
	Oct 18 14:22:20 addons-493618 kubelet[1280]: I1018 14:22:20.528240    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-w9ks6" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:22:25 addons-493618 kubelet[1280]: E1018 14:22:25.399587    1280 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://kicbase/echo-server:1.0: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image" image="docker.io/kicbase/echo-server:1.0"
	Oct 18 14:22:25 addons-493618 kubelet[1280]: E1018 14:22:25.399655    1280 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://kicbase/echo-server:1.0: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image" image="docker.io/kicbase/echo-server:1.0"
	Oct 18 14:22:25 addons-493618 kubelet[1280]: E1018 14:22:25.399887    1280 kuberuntime_manager.go:1449] "Unhandled Error" err="container hello-world-app start failed in pod hello-world-app-5d498dc89-9gb5k_default(d9bf04c9-933f-480e-a7d0-77e9398aab3c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kicbase/echo-server:1.0: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image" logger="UnhandledError"
	Oct 18 14:22:25 addons-493618 kubelet[1280]: E1018 14:22:25.399961    1280 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://kicbase/echo-server:1.0: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="default/hello-world-app-5d498dc89-9gb5k" podUID="d9bf04c9-933f-480e-a7d0-77e9398aab3c"
	Oct 18 14:22:26 addons-493618 kubelet[1280]: E1018 14:22:26.084214    1280 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kicbase/echo-server:1.0: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="default/hello-world-app-5d498dc89-9gb5k" podUID="d9bf04c9-933f-480e-a7d0-77e9398aab3c"
	Oct 18 14:22:50 addons-493618 kubelet[1280]: I1018 14:22:50.528169    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ps8fn" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:22:53 addons-493618 kubelet[1280]: I1018 14:22:53.527446    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-dddz6" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:23:28 addons-493618 kubelet[1280]: E1018 14:23:28.068059    1280 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 18 14:23:28 addons-493618 kubelet[1280]: E1018 14:23:28.068130    1280 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 18 14:23:28 addons-493618 kubelet[1280]: E1018 14:23:28.068382    1280 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod-restore_default(55a04f24-70ab-4ed9-9957-f15ef2c7f034): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 14:23:28 addons-493618 kubelet[1280]: E1018 14:23:28.068450    1280 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod-restore" podUID="55a04f24-70ab-4ed9-9957-f15ef2c7f034"
	Oct 18 14:23:39 addons-493618 kubelet[1280]: E1018 14:23:39.528455    1280 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod-restore" podUID="55a04f24-70ab-4ed9-9957-f15ef2c7f034"
	Oct 18 14:23:47 addons-493618 kubelet[1280]: I1018 14:23:47.528367    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-w9ks6" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:23:54 addons-493618 kubelet[1280]: E1018 14:23:54.528634    1280 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod-restore" podUID="55a04f24-70ab-4ed9-9957-f15ef2c7f034"
	Oct 18 14:24:10 addons-493618 kubelet[1280]: I1018 14:24:10.527616    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-dddz6" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:24:14 addons-493618 kubelet[1280]: E1018 14:24:14.601824    1280 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://kicbase/echo-server:1.0: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image" image="docker.io/kicbase/echo-server:1.0"
	Oct 18 14:24:14 addons-493618 kubelet[1280]: E1018 14:24:14.601890    1280 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://kicbase/echo-server:1.0: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image" image="docker.io/kicbase/echo-server:1.0"
	Oct 18 14:24:14 addons-493618 kubelet[1280]: E1018 14:24:14.602115    1280 kuberuntime_manager.go:1449] "Unhandled Error" err="container hello-world-app start failed in pod hello-world-app-5d498dc89-9gb5k_default(d9bf04c9-933f-480e-a7d0-77e9398aab3c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kicbase/echo-server:1.0: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image" logger="UnhandledError"
	Oct 18 14:24:14 addons-493618 kubelet[1280]: E1018 14:24:14.602173    1280 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://kicbase/echo-server:1.0: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="default/hello-world-app-5d498dc89-9gb5k" podUID="d9bf04c9-933f-480e-a7d0-77e9398aab3c"
	Oct 18 14:24:17 addons-493618 kubelet[1280]: I1018 14:24:17.528225    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ps8fn" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:24:26 addons-493618 kubelet[1280]: E1018 14:24:26.528604    1280 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kicbase/echo-server:1.0: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="default/hello-world-app-5d498dc89-9gb5k" podUID="d9bf04c9-933f-480e-a7d0-77e9398aab3c"
	
	
	==> storage-provisioner [d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e] <==
	W1018 14:24:33.118757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:35.122015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:35.125946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:37.128984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:37.134309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:39.137617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:39.141490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:41.145113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:41.149052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:43.153195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:43.157003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:45.161314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:45.166432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:47.170217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:47.174284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:49.177677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:49.182968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:51.185856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:51.190814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:53.194082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:53.198102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:55.201435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:55.206788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:57.210575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:24:57.215617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
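Every image-pull failure in the log above shares one root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests), not a defect in the addon under test. One way to make such runs independent of docker.io, sketched below assuming the CI host already holds the images locally (for example via an authenticated mirror), is to side-load them into the profile before the pods are deployed:

	# A minimal sketch, assuming the images exist in the host's local image store;
	# `minikube image load` copies them into the node's CRI-O store so kubelet
	# never has to contact docker.io at pull time.
	minikube -p addons-493618 image load docker.io/kicbase/echo-server:1.0
	minikube -p addons-493618 image load docker.io/library/nginx:latest

	# Confirm the images are visible to the node before re-running the test.
	minikube -p addons-493618 image ls | grep -E 'echo-server|nginx'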
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-493618 -n addons-493618
helpers_test.go:269: (dbg) Run:  kubectl --context addons-493618 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-9gb5k task-pv-pod-restore ingress-nginx-admission-create-tnv6j ingress-nginx-admission-patch-vxb5f
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-493618 describe pod hello-world-app-5d498dc89-9gb5k task-pv-pod-restore ingress-nginx-admission-create-tnv6j ingress-nginx-admission-patch-vxb5f
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-493618 describe pod hello-world-app-5d498dc89-9gb5k task-pv-pod-restore ingress-nginx-admission-create-tnv6j ingress-nginx-admission-patch-vxb5f: exit status 1 (75.373522ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-9gb5k
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-493618/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:20:36 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:           10.244.0.32
	Controlled By:  ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n7mqh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n7mqh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m22s                default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-9gb5k to addons-493618
	  Warning  Failed     44s (x2 over 2m33s)  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": unable to pull image or OCI artifact: pull image err: initializing source docker://kicbase/echo-server:1.0: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image
	  Warning  Failed     44s (x2 over 2m33s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    32s (x2 over 2m32s)  kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     32s (x2 over 2m32s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    18s (x3 over 4m22s)  kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-493618/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:18:55 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lwrqd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-lwrqd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-493618
	  Warning  Failed     3m11s                kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     90s (x2 over 4m59s)  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     90s (x3 over 4m59s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    64s (x4 over 4m58s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     64s (x4 over 4m58s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    49s (x4 over 6m3s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-tnv6j" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vxb5f" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-493618 describe pod hello-world-app-5d498dc89-9gb5k task-pv-pod-restore ingress-nginx-admission-create-tnv6j ingress-nginx-admission-patch-vxb5f: exit status 1
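The two NotFound errors are expected noise rather than part of the failure: the non-running pods are listed across all namespaces (-A), but the follow-up describe runs without -n, so the short-lived ingress-nginx admission pods are looked up in the default namespace (and may also have been garbage-collected in between). A hypothetical, more robust variant of the helper would carry each pod's namespace through to the describe:

	# Sketch only: pair each non-running pod with its namespace so that
	# `kubectl describe` is not pinned to the default namespace.
	kubectl --context addons-493618 get po -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace} {.metadata.name}{"\n"}{end}' |
	while read -r ns pod; do
	  kubectl --context addons-493618 describe pod "$pod" -n "$ns"
	done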
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-493618 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (240.812838ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 14:24:58.314976  113014 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:24:58.315246  113014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:24:58.315256  113014 out.go:374] Setting ErrFile to fd 2...
	I1018 14:24:58.315262  113014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:24:58.315479  113014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:24:58.315780  113014 mustload.go:65] Loading cluster: addons-493618
	I1018 14:24:58.316163  113014 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:24:58.316183  113014 addons.go:606] checking whether the cluster is paused
	I1018 14:24:58.316266  113014 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:24:58.316278  113014 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:24:58.316683  113014 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:24:58.336449  113014 ssh_runner.go:195] Run: systemctl --version
	I1018 14:24:58.336516  113014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:24:58.355690  113014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:24:58.452909  113014 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:24:58.453015  113014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:24:58.483531  113014 cri.go:89] found id: "a2d90a4bb564c43991d5a0c84c81880730aa5a76930e356ff3a20d99954e1b06"
	I1018 14:24:58.483554  113014 cri.go:89] found id: "fcb7161ee1d1b988b7ab1002f35293f0b7091c9bac0d7bde7e8471426df926cf"
	I1018 14:24:58.483558  113014 cri.go:89] found id: "8f357a51c6b5dca76ccd31166d38783da025c5cbb4407847372de403612f8902"
	I1018 14:24:58.483561  113014 cri.go:89] found id: "530e145d6c2e072e1f708ba6909c358faf2489d1fb8aa08aa89f54faf841dbbf"
	I1018 14:24:58.483564  113014 cri.go:89] found id: "84cd4c11831db761e4325adee4879bf6a046512882f96bb1ca62c4882e6c8939"
	I1018 14:24:58.483566  113014 cri.go:89] found id: "10ae25ecd1d9000b05b1943468a2e37d7d7ad234a9cbda7cef9a4926bc9a192c"
	I1018 14:24:58.483569  113014 cri.go:89] found id: "50a19f5b596d4f200db4d0ab0cd9f04fbb94a5ef09f512aeefc81c7d4920b424"
	I1018 14:24:58.483571  113014 cri.go:89] found id: "859d5d72eef12d0f592a58b42cf0be1853b423f2c91a731d854b74ff79470e8f"
	I1018 14:24:58.483574  113014 cri.go:89] found id: "78aea4ac76ed251743d7541eaa9c63acd9c8c2af1858f2862ef1bb654b9423b3"
	I1018 14:24:58.483579  113014 cri.go:89] found id: "775733aea8bf0d456fffa861d7bbf1e97ecc24f79c468dcc0c8f760bfe0b6df0"
	I1018 14:24:58.483582  113014 cri.go:89] found id: "6673efa0776562914d4e7c35e4a4a121d60c7796edfc736b01120598fdf6dfda"
	I1018 14:24:58.483586  113014 cri.go:89] found id: "89679d50a3910712c81dd83e3ab5d310452f13a23c0dce3d8afeb4abec58b99f"
	I1018 14:24:58.483598  113014 cri.go:89] found id: "c52d44cde4f7100b3b7100db0eda92a6716001140bcb111fc1b8bad7bcffcd87"
	I1018 14:24:58.483603  113014 cri.go:89] found id: "92ceaca691f5147122ab362b3936a7201bede1ae2ac6288d04e5d0641f150e2f"
	I1018 14:24:58.483606  113014 cri.go:89] found id: "da0ddb2d0550b3553349e63e5796b3d59ac8c5a7d574c962118808b4750f4934"
	I1018 14:24:58.483612  113014 cri.go:89] found id: "a51f3eea295020493bcd77872d74756d33249e7b25591c8967346f770e49a885"
	I1018 14:24:58.483616  113014 cri.go:89] found id: "ca1869e801d6ed194d599a52234484412f6d9de897b6eadae30ca18325c92762"
	I1018 14:24:58.483622  113014 cri.go:89] found id: "7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca"
	I1018 14:24:58.483626  113014 cri.go:89] found id: "d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e"
	I1018 14:24:58.483630  113014 cri.go:89] found id: "778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750"
	I1018 14:24:58.483634  113014 cri.go:89] found id: "fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa"
	I1018 14:24:58.483637  113014 cri.go:89] found id: "f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4"
	I1018 14:24:58.483641  113014 cri.go:89] found id: "411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5"
	I1018 14:24:58.483646  113014 cri.go:89] found id: "857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1"
	I1018 14:24:58.483650  113014 cri.go:89] found id: "aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8"
	I1018 14:24:58.483657  113014 cri.go:89] found id: ""
	I1018 14:24:58.483703  113014 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 14:24:58.498186  113014 out.go:203] 
	W1018 14:24:58.499613  113014 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:24:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 14:24:58.499650  113014 out.go:285] * 
	W1018 14:24:58.504801  113014 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 14:24:58.506392  113014 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-493618 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
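Both disable commands (and the addon enable calls elsewhere in this report) abort in the same pre-flight step: before touching the addon, minikube checks whether the cluster is paused by listing kube-system containers over SSH and then running `sudo runc list -f json` on the node. On this CRI-O node /run/runc does not exist, so the probe itself fails and the command exits 11 with MK_ADDON_DISABLE_PAUSED before any addon work happens. The failing step can be reproduced by hand; a sketch assuming the profile name from this run:

	# The container listing that succeeds in the trace above:
	minikube -p addons-493618 ssh -- sudo crictl ps -a --quiet \
	  --label io.kubernetes.pod.namespace=kube-system

	# The paused-state probe that fails: runc's state root is missing here.
	minikube -p addons-493618 ssh -- sudo runc list -f json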
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-493618 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (241.039387ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 14:24:58.561490  113089 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:24:58.561726  113089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:24:58.561734  113089 out.go:374] Setting ErrFile to fd 2...
	I1018 14:24:58.561738  113089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:24:58.561957  113089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:24:58.562253  113089 mustload.go:65] Loading cluster: addons-493618
	I1018 14:24:58.562582  113089 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:24:58.562597  113089 addons.go:606] checking whether the cluster is paused
	I1018 14:24:58.562676  113089 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:24:58.562688  113089 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:24:58.563082  113089 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:24:58.580654  113089 ssh_runner.go:195] Run: systemctl --version
	I1018 14:24:58.580712  113089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:24:58.598990  113089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:24:58.694645  113089 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:24:58.694738  113089 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:24:58.724187  113089 cri.go:89] found id: "a2d90a4bb564c43991d5a0c84c81880730aa5a76930e356ff3a20d99954e1b06"
	I1018 14:24:58.724210  113089 cri.go:89] found id: "fcb7161ee1d1b988b7ab1002f35293f0b7091c9bac0d7bde7e8471426df926cf"
	I1018 14:24:58.724214  113089 cri.go:89] found id: "8f357a51c6b5dca76ccd31166d38783da025c5cbb4407847372de403612f8902"
	I1018 14:24:58.724217  113089 cri.go:89] found id: "530e145d6c2e072e1f708ba6909c358faf2489d1fb8aa08aa89f54faf841dbbf"
	I1018 14:24:58.724220  113089 cri.go:89] found id: "84cd4c11831db761e4325adee4879bf6a046512882f96bb1ca62c4882e6c8939"
	I1018 14:24:58.724223  113089 cri.go:89] found id: "10ae25ecd1d9000b05b1943468a2e37d7d7ad234a9cbda7cef9a4926bc9a192c"
	I1018 14:24:58.724226  113089 cri.go:89] found id: "50a19f5b596d4f200db4d0ab0cd9f04fbb94a5ef09f512aeefc81c7d4920b424"
	I1018 14:24:58.724240  113089 cri.go:89] found id: "859d5d72eef12d0f592a58b42cf0be1853b423f2c91a731d854b74ff79470e8f"
	I1018 14:24:58.724243  113089 cri.go:89] found id: "78aea4ac76ed251743d7541eaa9c63acd9c8c2af1858f2862ef1bb654b9423b3"
	I1018 14:24:58.724248  113089 cri.go:89] found id: "775733aea8bf0d456fffa861d7bbf1e97ecc24f79c468dcc0c8f760bfe0b6df0"
	I1018 14:24:58.724250  113089 cri.go:89] found id: "6673efa0776562914d4e7c35e4a4a121d60c7796edfc736b01120598fdf6dfda"
	I1018 14:24:58.724253  113089 cri.go:89] found id: "89679d50a3910712c81dd83e3ab5d310452f13a23c0dce3d8afeb4abec58b99f"
	I1018 14:24:58.724256  113089 cri.go:89] found id: "c52d44cde4f7100b3b7100db0eda92a6716001140bcb111fc1b8bad7bcffcd87"
	I1018 14:24:58.724258  113089 cri.go:89] found id: "92ceaca691f5147122ab362b3936a7201bede1ae2ac6288d04e5d0641f150e2f"
	I1018 14:24:58.724261  113089 cri.go:89] found id: "da0ddb2d0550b3553349e63e5796b3d59ac8c5a7d574c962118808b4750f4934"
	I1018 14:24:58.724265  113089 cri.go:89] found id: "a51f3eea295020493bcd77872d74756d33249e7b25591c8967346f770e49a885"
	I1018 14:24:58.724268  113089 cri.go:89] found id: "ca1869e801d6ed194d599a52234484412f6d9de897b6eadae30ca18325c92762"
	I1018 14:24:58.724271  113089 cri.go:89] found id: "7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca"
	I1018 14:24:58.724274  113089 cri.go:89] found id: "d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e"
	I1018 14:24:58.724276  113089 cri.go:89] found id: "778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750"
	I1018 14:24:58.724278  113089 cri.go:89] found id: "fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa"
	I1018 14:24:58.724281  113089 cri.go:89] found id: "f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4"
	I1018 14:24:58.724283  113089 cri.go:89] found id: "411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5"
	I1018 14:24:58.724285  113089 cri.go:89] found id: "857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1"
	I1018 14:24:58.724288  113089 cri.go:89] found id: "aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8"
	I1018 14:24:58.724290  113089 cri.go:89] found id: ""
	I1018 14:24:58.724328  113089 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 14:24:58.739651  113089 out.go:203] 
	W1018 14:24:58.741085  113089 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:24:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 14:24:58.741111  113089 out.go:285] * 
	W1018 14:24:58.746406  113089 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 14:24:58.747876  113089 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-493618 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (400.42s)

                                                
                                    
TestAddons/parallel/Headlamp (2.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-493618 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-493618 --alsologtostderr -v=1: exit status 11 (235.541781ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 14:18:06.893291  103616 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:18:06.893587  103616 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:06.893598  103616 out.go:374] Setting ErrFile to fd 2...
	I1018 14:18:06.893603  103616 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:06.893805  103616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:18:06.894124  103616 mustload.go:65] Loading cluster: addons-493618
	I1018 14:18:06.894463  103616 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:06.894480  103616 addons.go:606] checking whether the cluster is paused
	I1018 14:18:06.894554  103616 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:06.894567  103616 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:18:06.894972  103616 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:18:06.912361  103616 ssh_runner.go:195] Run: systemctl --version
	I1018 14:18:06.912429  103616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:18:06.929440  103616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:18:07.025784  103616 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:18:07.025851  103616 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:18:07.055201  103616 cri.go:89] found id: "fcb7161ee1d1b988b7ab1002f35293f0b7091c9bac0d7bde7e8471426df926cf"
	I1018 14:18:07.055225  103616 cri.go:89] found id: "8f357a51c6b5dca76ccd31166d38783da025c5cbb4407847372de403612f8902"
	I1018 14:18:07.055230  103616 cri.go:89] found id: "530e145d6c2e072e1f708ba6909c358faf2489d1fb8aa08aa89f54faf841dbbf"
	I1018 14:18:07.055234  103616 cri.go:89] found id: "84cd4c11831db761e4325adee4879bf6a046512882f96bb1ca62c4882e6c8939"
	I1018 14:18:07.055238  103616 cri.go:89] found id: "10ae25ecd1d9000b05b1943468a2e37d7d7ad234a9cbda7cef9a4926bc9a192c"
	I1018 14:18:07.055246  103616 cri.go:89] found id: "50a19f5b596d4f200db4d0ab0cd9f04fbb94a5ef09f512aeefc81c7d4920b424"
	I1018 14:18:07.055249  103616 cri.go:89] found id: "859d5d72eef12d0f592a58b42cf0be1853b423f2c91a731d854b74ff79470e8f"
	I1018 14:18:07.055253  103616 cri.go:89] found id: "78aea4ac76ed251743d7541eaa9c63acd9c8c2af1858f2862ef1bb654b9423b3"
	I1018 14:18:07.055256  103616 cri.go:89] found id: "775733aea8bf0d456fffa861d7bbf1e97ecc24f79c468dcc0c8f760bfe0b6df0"
	I1018 14:18:07.055273  103616 cri.go:89] found id: "6673efa0776562914d4e7c35e4a4a121d60c7796edfc736b01120598fdf6dfda"
	I1018 14:18:07.055278  103616 cri.go:89] found id: "89679d50a3910712c81dd83e3ab5d310452f13a23c0dce3d8afeb4abec58b99f"
	I1018 14:18:07.055282  103616 cri.go:89] found id: "c52d44cde4f7100b3b7100db0eda92a6716001140bcb111fc1b8bad7bcffcd87"
	I1018 14:18:07.055287  103616 cri.go:89] found id: "92ceaca691f5147122ab362b3936a7201bede1ae2ac6288d04e5d0641f150e2f"
	I1018 14:18:07.055292  103616 cri.go:89] found id: "da0ddb2d0550b3553349e63e5796b3d59ac8c5a7d574c962118808b4750f4934"
	I1018 14:18:07.055297  103616 cri.go:89] found id: "a51f3eea295020493bcd77872d74756d33249e7b25591c8967346f770e49a885"
	I1018 14:18:07.055304  103616 cri.go:89] found id: "ca1869e801d6ed194d599a52234484412f6d9de897b6eadae30ca18325c92762"
	I1018 14:18:07.055309  103616 cri.go:89] found id: "7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca"
	I1018 14:18:07.055315  103616 cri.go:89] found id: "d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e"
	I1018 14:18:07.055319  103616 cri.go:89] found id: "778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750"
	I1018 14:18:07.055322  103616 cri.go:89] found id: "fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa"
	I1018 14:18:07.055326  103616 cri.go:89] found id: "f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4"
	I1018 14:18:07.055329  103616 cri.go:89] found id: "411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5"
	I1018 14:18:07.055333  103616 cri.go:89] found id: "857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1"
	I1018 14:18:07.055337  103616 cri.go:89] found id: "aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8"
	I1018 14:18:07.055340  103616 cri.go:89] found id: ""
	I1018 14:18:07.055390  103616 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 14:18:07.069463  103616 out.go:203] 
	W1018 14:18:07.070731  103616 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 14:18:07.070750  103616 out.go:285] * 
	W1018 14:18:07.075701  103616 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 14:18:07.077241  103616 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-493618 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-493618
helpers_test.go:243: (dbg) docker inspect addons-493618:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748",
	        "Created": "2025-10-18T14:15:35.142040375Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 95181,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T14:15:35.183965001Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748/hosts",
	        "LogPath": "/var/lib/docker/containers/7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748/7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748-json.log",
	        "Name": "/addons-493618",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-493618:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-493618",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7b0baa1647a963297669b05718eb2ea04f74f573bfce570969968f96503e0748",
	                "LowerDir": "/var/lib/docker/overlay2/6c53b42c7ae30be0a92aeee9616153aa98a76b8686b1ba574fed988eef723540-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c53b42c7ae30be0a92aeee9616153aa98a76b8686b1ba574fed988eef723540/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c53b42c7ae30be0a92aeee9616153aa98a76b8686b1ba574fed988eef723540/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c53b42c7ae30be0a92aeee9616153aa98a76b8686b1ba574fed988eef723540/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-493618",
	                "Source": "/var/lib/docker/volumes/addons-493618/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-493618",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-493618",
	                "name.minikube.sigs.k8s.io": "addons-493618",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a631e0cd76d05941fb0936045345b47fc87f5c3a110522f5c55a7218ec039637",
	            "SandboxKey": "/var/run/docker/netns/a631e0cd76d0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-493618": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:eb:b6:c3:02:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d904be0aa70c1af2cea11004150f1e24caa7082b6124c61db9de726e07acfb2f",
	                    "EndpointID": "8a31c67497c108fe079824c35877145f7cc3de3038048bb81926ece73d316513",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-493618",
	                        "7b0baa1647a9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-493618 -n addons-493618
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-493618 logs -n 25: (1.146251994s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-498093 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-498093   │ jenkins │ v1.37.0 │ 18 Oct 25 14:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │ 18 Oct 25 14:15 UTC │
	│ delete  │ -p download-only-498093                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-498093   │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │ 18 Oct 25 14:15 UTC │
	│ start   │ -o=json --download-only -p download-only-142592 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-142592   │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │ 18 Oct 25 14:15 UTC │
	│ delete  │ -p download-only-142592                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-142592   │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │ 18 Oct 25 14:15 UTC │
	│ delete  │ -p download-only-498093                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-498093   │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │ 18 Oct 25 14:15 UTC │
	│ delete  │ -p download-only-142592                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-142592   │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │ 18 Oct 25 14:15 UTC │
	│ start   │ --download-only -p download-docker-735106 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-735106 │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │                     │
	│ delete  │ -p download-docker-735106                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-735106 │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │ 18 Oct 25 14:15 UTC │
	│ start   │ --download-only -p binary-mirror-035412 --alsologtostderr --binary-mirror http://127.0.0.1:38181 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-035412   │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │                     │
	│ delete  │ -p binary-mirror-035412                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-035412   │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │ 18 Oct 25 14:15 UTC │
	│ addons  │ disable dashboard -p addons-493618                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │                     │
	│ addons  │ enable dashboard -p addons-493618                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │                     │
	│ start   │ -p addons-493618 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │ 18 Oct 25 14:17 UTC │
	│ addons  │ addons-493618 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:17 UTC │                     │
	│ addons  │ addons-493618 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	│ addons  │ enable headlamp -p addons-493618 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-493618          │ jenkins │ v1.37.0 │ 18 Oct 25 14:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 14:15:10
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 14:15:10.844195   94518 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:15:10.844315   94518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:15:10.844327   94518 out.go:374] Setting ErrFile to fd 2...
	I1018 14:15:10.844333   94518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:15:10.844524   94518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:15:10.845093   94518 out.go:368] Setting JSON to false
	I1018 14:15:10.845947   94518 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7062,"bootTime":1760789849,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:15:10.846045   94518 start.go:141] virtualization: kvm guest
	I1018 14:15:10.847714   94518 out.go:179] * [addons-493618] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 14:15:10.849170   94518 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 14:15:10.849206   94518 notify.go:220] Checking for updates...
	I1018 14:15:10.851802   94518 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:15:10.852939   94518 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 14:15:10.854257   94518 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 14:15:10.855457   94518 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 14:15:10.856592   94518 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 14:15:10.857794   94518 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:15:10.881142   94518 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 14:15:10.881259   94518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:15:10.937968   94518 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-18 14:15:10.928477658 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:15:10.938071   94518 docker.go:318] overlay module found
	I1018 14:15:10.939805   94518 out.go:179] * Using the docker driver based on user configuration
	I1018 14:15:10.941011   94518 start.go:305] selected driver: docker
	I1018 14:15:10.941024   94518 start.go:925] validating driver "docker" against <nil>
	I1018 14:15:10.941035   94518 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 14:15:10.941568   94518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:15:10.999497   94518 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-18 14:15:10.990143183 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:15:10.999700   94518 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 14:15:10.999943   94518 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 14:15:11.001690   94518 out.go:179] * Using Docker driver with root privileges
	I1018 14:15:11.002970   94518 cni.go:84] Creating CNI manager for ""
	I1018 14:15:11.003053   94518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 14:15:11.003064   94518 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 14:15:11.003145   94518 start.go:349] cluster config:
	{Name:addons-493618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-493618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:15:11.004498   94518 out.go:179] * Starting "addons-493618" primary control-plane node in "addons-493618" cluster
	I1018 14:15:11.005651   94518 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 14:15:11.006976   94518 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 14:15:11.008175   94518 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:15:11.008218   94518 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 14:15:11.008231   94518 cache.go:58] Caching tarball of preloaded images
	I1018 14:15:11.008228   94518 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 14:15:11.008318   94518 preload.go:233] Found /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 14:15:11.008329   94518 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 14:15:11.008714   94518 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/config.json ...
	I1018 14:15:11.008737   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/config.json: {Name:mkdee9574b0b95000e535daf1bcb85983e767ae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:11.024821   94518 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 14:15:11.024970   94518 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 14:15:11.024989   94518 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 14:15:11.024994   94518 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 14:15:11.025001   94518 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 14:15:11.025006   94518 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 14:15:23.525530   94518 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 14:15:23.525596   94518 cache.go:232] Successfully downloaded all kic artifacts
	I1018 14:15:23.525645   94518 start.go:360] acquireMachinesLock for addons-493618: {Name:mkcf1dcaefe933480e3898dd01dccab4476df687 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 14:15:23.525773   94518 start.go:364] duration metric: took 97.675µs to acquireMachinesLock for "addons-493618"
	I1018 14:15:23.525804   94518 start.go:93] Provisioning new machine with config: &{Name:addons-493618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-493618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 14:15:23.525942   94518 start.go:125] createHost starting for "" (driver="docker")
	I1018 14:15:23.527896   94518 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 14:15:23.528207   94518 start.go:159] libmachine.API.Create for "addons-493618" (driver="docker")
	I1018 14:15:23.528245   94518 client.go:168] LocalClient.Create starting
	I1018 14:15:23.528363   94518 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem
	I1018 14:15:23.977885   94518 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem
	I1018 14:15:24.038227   94518 cli_runner.go:164] Run: docker network inspect addons-493618 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 14:15:24.054247   94518 cli_runner.go:211] docker network inspect addons-493618 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 14:15:24.054314   94518 network_create.go:284] running [docker network inspect addons-493618] to gather additional debugging logs...
	I1018 14:15:24.054332   94518 cli_runner.go:164] Run: docker network inspect addons-493618
	W1018 14:15:24.070008   94518 cli_runner.go:211] docker network inspect addons-493618 returned with exit code 1
	I1018 14:15:24.070042   94518 network_create.go:287] error running [docker network inspect addons-493618]: docker network inspect addons-493618: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-493618 not found
	I1018 14:15:24.070073   94518 network_create.go:289] output of [docker network inspect addons-493618]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-493618 not found
	
	** /stderr **
	I1018 14:15:24.070206   94518 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 14:15:24.087173   94518 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e55a00}
	I1018 14:15:24.087222   94518 network_create.go:124] attempt to create docker network addons-493618 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 14:15:24.087280   94518 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-493618 addons-493618
	I1018 14:15:24.145261   94518 network_create.go:108] docker network addons-493618 192.168.49.0/24 created
	I1018 14:15:24.145291   94518 kic.go:121] calculated static IP "192.168.49.2" for the "addons-493618" container
	I1018 14:15:24.145378   94518 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 14:15:24.161100   94518 cli_runner.go:164] Run: docker volume create addons-493618 --label name.minikube.sigs.k8s.io=addons-493618 --label created_by.minikube.sigs.k8s.io=true
	I1018 14:15:24.178649   94518 oci.go:103] Successfully created a docker volume addons-493618
	I1018 14:15:24.178727   94518 cli_runner.go:164] Run: docker run --rm --name addons-493618-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-493618 --entrypoint /usr/bin/test -v addons-493618:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 14:15:30.677122   94518 cli_runner.go:217] Completed: docker run --rm --name addons-493618-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-493618 --entrypoint /usr/bin/test -v addons-493618:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (6.49835529s)
	I1018 14:15:30.677159   94518 oci.go:107] Successfully prepared a docker volume addons-493618
	I1018 14:15:30.677190   94518 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:15:30.677212   94518 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 14:15:30.677277   94518 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-493618:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 14:15:35.066928   94518 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-493618:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.389587346s)
	I1018 14:15:35.066965   94518 kic.go:203] duration metric: took 4.38974774s to extract preloaded images to volume ...
	W1018 14:15:35.067065   94518 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 14:15:35.067125   94518 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 14:15:35.067165   94518 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 14:15:35.125586   94518 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-493618 --name addons-493618 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-493618 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-493618 --network addons-493618 --ip 192.168.49.2 --volume addons-493618:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 14:15:35.438654   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Running}}
	I1018 14:15:35.457572   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:35.476400   94518 cli_runner.go:164] Run: docker exec addons-493618 stat /var/lib/dpkg/alternatives/iptables
	I1018 14:15:35.523494   94518 oci.go:144] the created container "addons-493618" has a running status.
	I1018 14:15:35.523536   94518 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa...
	I1018 14:15:35.628924   94518 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 14:15:35.654055   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:35.673745   94518 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 14:15:35.673769   94518 kic_runner.go:114] Args: [docker exec --privileged addons-493618 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 14:15:35.716664   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:35.738950   94518 machine.go:93] provisionDockerMachine start ...
	I1018 14:15:35.739054   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:35.761798   94518 main.go:141] libmachine: Using SSH client type: native
	I1018 14:15:35.762148   94518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 14:15:35.762167   94518 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 14:15:35.762887   94518 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38702->127.0.0.1:32768: read: connection reset by peer
	I1018 14:15:38.898415   94518 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-493618
	
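The first dial above fails with "connection reset by peer" and the same command succeeds about three seconds later: the provisioner keeps retrying until sshd inside the freshly started container accepts connections. A rough Go sketch of that wait-for-SSH pattern (an assumption about the behavior shown in the log, not the actual libmachine code):

	// Sketch: poll a TCP endpoint until it accepts connections or a deadline passes.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close() // port is accepting connections
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("ssh endpoint %s not ready within %s", addr, timeout)
	}

	func main() {
		// 127.0.0.1:32768 is the SSH host port Docker assigned in this run.
		if err := waitForSSH("127.0.0.1:32768", 30*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("ssh is reachable")
	}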
	I1018 14:15:38.898444   94518 ubuntu.go:182] provisioning hostname "addons-493618"
	I1018 14:15:38.898497   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:38.915941   94518 main.go:141] libmachine: Using SSH client type: native
	I1018 14:15:38.916229   94518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 14:15:38.916247   94518 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-493618 && echo "addons-493618" | sudo tee /etc/hostname
	I1018 14:15:39.059322   94518 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-493618
	
	I1018 14:15:39.059403   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:39.077377   94518 main.go:141] libmachine: Using SSH client type: native
	I1018 14:15:39.077594   94518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 14:15:39.077611   94518 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-493618' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-493618/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-493618' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 14:15:39.210493   94518 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 14:15:39.210526   94518 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 14:15:39.210562   94518 ubuntu.go:190] setting up certificates
	I1018 14:15:39.210574   94518 provision.go:84] configureAuth start
	I1018 14:15:39.210640   94518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-493618
	I1018 14:15:39.227138   94518 provision.go:143] copyHostCerts
	I1018 14:15:39.227219   94518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 14:15:39.227331   94518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 14:15:39.227397   94518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 14:15:39.227463   94518 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.addons-493618 san=[127.0.0.1 192.168.49.2 addons-493618 localhost minikube]
	I1018 14:15:39.766960   94518 provision.go:177] copyRemoteCerts
	I1018 14:15:39.767023   94518 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 14:15:39.767059   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:39.785116   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:39.881305   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 14:15:39.900749   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 14:15:39.918059   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 14:15:39.936428   94518 provision.go:87] duration metric: took 725.836064ms to configureAuth
	I1018 14:15:39.936459   94518 ubuntu.go:206] setting minikube options for container-runtime
	I1018 14:15:39.936620   94518 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:15:39.936726   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:39.953814   94518 main.go:141] libmachine: Using SSH client type: native
	I1018 14:15:39.954104   94518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 14:15:39.954132   94518 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 14:15:40.197505   94518 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 14:15:40.197532   94518 machine.go:96] duration metric: took 4.458558157s to provisionDockerMachine
	I1018 14:15:40.197544   94518 client.go:171] duration metric: took 16.669289178s to LocalClient.Create
	I1018 14:15:40.197568   94518 start.go:167] duration metric: took 16.669361804s to libmachine.API.Create "addons-493618"
	I1018 14:15:40.197580   94518 start.go:293] postStartSetup for "addons-493618" (driver="docker")
	I1018 14:15:40.197594   94518 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 14:15:40.197676   94518 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 14:15:40.197732   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:40.214597   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:40.313123   94518 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 14:15:40.316613   94518 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 14:15:40.316636   94518 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 14:15:40.316649   94518 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/addons for local assets ...
	I1018 14:15:40.316713   94518 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/files for local assets ...
	I1018 14:15:40.316739   94518 start.go:296] duration metric: took 119.152647ms for postStartSetup
	I1018 14:15:40.317068   94518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-493618
	I1018 14:15:40.334170   94518 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/config.json ...
	I1018 14:15:40.334433   94518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 14:15:40.334480   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:40.351086   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:40.444185   94518 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 14:15:40.448983   94518 start.go:128] duration metric: took 16.923022705s to createHost
	I1018 14:15:40.449022   94518 start.go:83] releasing machines lock for "addons-493618", held for 16.923231309s
	I1018 14:15:40.449108   94518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-493618
	I1018 14:15:40.466240   94518 ssh_runner.go:195] Run: cat /version.json
	I1018 14:15:40.466278   94518 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 14:15:40.466315   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:40.466349   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:40.483258   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:40.484430   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:40.575602   94518 ssh_runner.go:195] Run: systemctl --version
	I1018 14:15:40.630562   94518 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 14:15:40.667185   94518 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 14:15:40.672266   94518 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 14:15:40.672342   94518 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 14:15:40.699256   94518 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 14:15:40.699280   94518 start.go:495] detecting cgroup driver to use...
	I1018 14:15:40.699309   94518 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 14:15:40.699382   94518 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 14:15:40.716022   94518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 14:15:40.728685   94518 docker.go:218] disabling cri-docker service (if available) ...
	I1018 14:15:40.728735   94518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 14:15:40.745467   94518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 14:15:40.763518   94518 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 14:15:40.852188   94518 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 14:15:40.941218   94518 docker.go:234] disabling docker service ...
	I1018 14:15:40.941291   94518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 14:15:40.960280   94518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 14:15:40.973519   94518 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 14:15:41.063896   94518 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 14:15:41.148959   94518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 14:15:41.161676   94518 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 14:15:41.176951   94518 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 14:15:41.177026   94518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:15:41.187952   94518 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 14:15:41.188013   94518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:15:41.197200   94518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:15:41.206326   94518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:15:41.215130   94518 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 14:15:41.223534   94518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:15:41.233043   94518 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 14:15:41.246975   94518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
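	For reference, after the sed edits above, /etc/crio/crio.conf.d/02-crio.conf should contain, among its pre-existing settings, lines equivalent to the following (reconstructed from the commands in this log, not captured from the node):
	
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]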
	I1018 14:15:41.256324   94518 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 14:15:41.263987   94518 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 14:15:41.264069   94518 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 14:15:41.276695   94518 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 14:15:41.284747   94518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:15:41.360872   94518 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 14:15:41.466951   94518 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 14:15:41.467031   94518 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 14:15:41.471440   94518 start.go:563] Will wait 60s for crictl version
	I1018 14:15:41.471517   94518 ssh_runner.go:195] Run: which crictl
	I1018 14:15:41.475466   94518 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 14:15:41.500862   94518 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 14:15:41.500988   94518 ssh_runner.go:195] Run: crio --version
	I1018 14:15:41.529363   94518 ssh_runner.go:195] Run: crio --version
	I1018 14:15:41.558832   94518 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 14:15:41.560098   94518 cli_runner.go:164] Run: docker network inspect addons-493618 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 14:15:41.577556   94518 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 14:15:41.581897   94518 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
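	That one-liner strips any stale host.minikube.internal entry and rewrites /etc/hosts so it carries exactly one mapping for the network gateway, i.e. a line of the form (reconstructed from the command, tab-separated):
	
	    192.168.49.1	host.minikube.internal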
	I1018 14:15:41.592876   94518 kubeadm.go:883] updating cluster {Name:addons-493618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-493618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 14:15:41.593049   94518 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 14:15:41.593097   94518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 14:15:41.626577   94518 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 14:15:41.626599   94518 crio.go:433] Images already preloaded, skipping extraction
	I1018 14:15:41.626659   94518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 14:15:41.651828   94518 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 14:15:41.651853   94518 cache_images.go:85] Images are preloaded, skipping loading
	I1018 14:15:41.651862   94518 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 14:15:41.651985   94518 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-493618 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-493618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
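These unit fragments are written out as the kubelet drop-in a few lines below; one way to inspect the assembled unit inside the node is (a sketch, run from the host):
	minikube ssh -p addons-493618 -- systemctl cat kubelet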
	I1018 14:15:41.652054   94518 ssh_runner.go:195] Run: crio config
	I1018 14:15:41.697070   94518 cni.go:84] Creating CNI manager for ""
	I1018 14:15:41.697097   94518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 14:15:41.697114   94518 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 14:15:41.697135   94518 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-493618 NodeName:addons-493618 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 14:15:41.697247   94518 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-493618"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
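	# The generated config is scp'd to /var/tmp/minikube/kubeadm.yaml below; recent
	# kubeadm releases can sanity-check such a file before init. A sketch, assuming
	# "kubeadm config validate" is available in this v1.34.1 binary (run in the node):
	#   sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	#       --config /var/tmp/minikube/kubeadm.yaml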
	
	I1018 14:15:41.697307   94518 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 14:15:41.705749   94518 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 14:15:41.705816   94518 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 14:15:41.714036   94518 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 14:15:41.727518   94518 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 14:15:41.743540   94518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1018 14:15:41.757431   94518 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 14:15:41.761307   94518 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 14:15:41.771339   94518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:15:41.848842   94518 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 14:15:41.872471   94518 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618 for IP: 192.168.49.2
	I1018 14:15:41.872502   94518 certs.go:195] generating shared ca certs ...
	I1018 14:15:41.872543   94518 certs.go:227] acquiring lock for ca certs: {Name:mk6d3b4e44856f498157352ffe8bb89d2c7a3998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:41.872726   94518 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key
	I1018 14:15:42.099521   94518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt ...
	I1018 14:15:42.099554   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt: {Name:mk29e474ac49378e3174669d30b699a0927d5939 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.099735   94518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key ...
	I1018 14:15:42.099748   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key: {Name:mk3df07768d76076523553d14b395d7aec695d8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.099827   94518 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key
	I1018 14:15:42.250081   94518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt ...
	I1018 14:15:42.250114   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt: {Name:mk9a000c7e66e15e6c70533a617d97af7b9526d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.250286   94518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key ...
	I1018 14:15:42.250299   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key: {Name:mked80e35481d07e9d2732a63324e9497996df0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.250389   94518 certs.go:257] generating profile certs ...
	I1018 14:15:42.250444   94518 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.key
	I1018 14:15:42.250458   94518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt with IP's: []
	I1018 14:15:42.310573   94518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt ...
	I1018 14:15:42.310609   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: {Name:mk817a96b6e7e4f2d967cd0f6b75836e15e32578 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.310772   94518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.key ...
	I1018 14:15:42.310783   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.key: {Name:mk2dc922e6933c9c6580f2453368c5810f4e481e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.310862   94518 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.key.4bff4883
	I1018 14:15:42.310880   94518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.crt.4bff4883 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 14:15:42.431608   94518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.crt.4bff4883 ...
	I1018 14:15:42.431643   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.crt.4bff4883: {Name:mkde2f0f0e05a8a44b434974d8b466c73645d4e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.431833   94518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.key.4bff4883 ...
	I1018 14:15:42.431850   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.key.4bff4883: {Name:mk6d2906da3206d1dab9c1811118ad12e5d1f944 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.431945   94518 certs.go:382] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.crt.4bff4883 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.crt
	I1018 14:15:42.432038   94518 certs.go:386] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.key.4bff4883 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.key
	I1018 14:15:42.432090   94518 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.key
	I1018 14:15:42.432109   94518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.crt with IP's: []
	I1018 14:15:42.629593   94518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.crt ...
	I1018 14:15:42.629624   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.crt: {Name:mkde5d9905c941564c933979fd5fade029103944 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.629812   94518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.key ...
	I1018 14:15:42.629826   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.key: {Name:mk36751e3ce77bf92cb13f27a98497c7ed9795bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:42.630014   94518 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 14:15:42.630049   94518 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem (1082 bytes)
	I1018 14:15:42.630071   94518 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem (1123 bytes)
	I1018 14:15:42.630096   94518 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem (1675 bytes)
	I1018 14:15:42.630764   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 14:15:42.650117   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 14:15:42.669226   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 14:15:42.690282   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 14:15:42.710069   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 14:15:42.728502   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 14:15:42.746298   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 14:15:42.764293   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 14:15:42.782203   94518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 14:15:42.801956   94518 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 14:15:42.814811   94518 ssh_runner.go:195] Run: openssl version
	I1018 14:15:42.821181   94518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 14:15:42.832594   94518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:15:42.836604   94518 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:15:42.836664   94518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 14:15:42.871729   94518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
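The two steps above implement OpenSSL's hashed-directory lookup: the symlink name is the certificate's subject hash (b5213941 for this CA) plus a ".0" suffix. The same link can be derived generically (sketch):
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"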
	I1018 14:15:42.881086   94518 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 14:15:42.884965   94518 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 14:15:42.885020   94518 kubeadm.go:400] StartCluster: {Name:addons-493618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-493618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:15:42.885113   94518 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:15:42.885177   94518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:15:42.913223   94518 cri.go:89] found id: ""
	I1018 14:15:42.913289   94518 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 14:15:42.921815   94518 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 14:15:42.930869   94518 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 14:15:42.930952   94518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 14:15:42.939927   94518 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 14:15:42.939956   94518 kubeadm.go:157] found existing configuration files:
	
	I1018 14:15:42.940012   94518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 14:15:42.948083   94518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 14:15:42.948160   94518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 14:15:42.955881   94518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 14:15:42.963517   94518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 14:15:42.963574   94518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 14:15:42.971090   94518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 14:15:42.979262   94518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 14:15:42.979341   94518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 14:15:42.986704   94518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 14:15:42.994650   94518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 14:15:42.994702   94518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 14:15:43.002430   94518 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 14:15:43.040520   94518 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 14:15:43.040577   94518 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 14:15:43.062959   94518 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 14:15:43.063081   94518 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 14:15:43.063146   94518 kubeadm.go:318] OS: Linux
	I1018 14:15:43.063197   94518 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 14:15:43.063262   94518 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 14:15:43.063319   94518 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 14:15:43.063359   94518 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 14:15:43.063397   94518 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 14:15:43.063445   94518 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 14:15:43.063497   94518 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 14:15:43.063534   94518 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 14:15:43.122707   94518 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 14:15:43.122870   94518 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 14:15:43.123048   94518 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 14:15:43.130408   94518 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 14:15:43.132493   94518 out.go:252]   - Generating certificates and keys ...
	I1018 14:15:43.132580   94518 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 14:15:43.132638   94518 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 14:15:43.195493   94518 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 14:15:43.335589   94518 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 14:15:43.540635   94518 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 14:15:43.653902   94518 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 14:15:43.807694   94518 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 14:15:43.807847   94518 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-493618 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 14:15:43.853102   94518 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 14:15:43.853283   94518 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-493618 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 14:15:43.971707   94518 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 14:15:44.039605   94518 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 14:15:44.636757   94518 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 14:15:44.636886   94518 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 14:15:45.211213   94518 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 14:15:45.796318   94518 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 14:15:45.822982   94518 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 14:15:46.106180   94518 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 14:15:46.239037   94518 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 14:15:46.239513   94518 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 14:15:46.243151   94518 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 14:15:46.244760   94518 out.go:252]   - Booting up control plane ...
	I1018 14:15:46.244874   94518 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 14:15:46.244990   94518 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 14:15:46.245625   94518 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 14:15:46.260250   94518 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 14:15:46.260360   94518 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 14:15:46.267696   94518 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 14:15:46.267817   94518 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 14:15:46.267866   94518 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 14:15:46.370744   94518 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 14:15:46.370865   94518 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 14:15:47.371649   94518 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000990385s
	I1018 14:15:47.376256   94518 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 14:15:47.376432   94518 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 14:15:47.376566   94518 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 14:15:47.376709   94518 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 14:15:49.135751   94518 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.759510931s
	I1018 14:15:49.255604   94518 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.879264109s
	I1018 14:15:50.878424   94518 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.502192934s
	I1018 14:15:50.890048   94518 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 14:15:50.901423   94518 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 14:15:50.910227   94518 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 14:15:50.910432   94518 kubeadm.go:318] [mark-control-plane] Marking the node addons-493618 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 14:15:50.918188   94518 kubeadm.go:318] [bootstrap-token] Using token: 2jy7nx.1zs0hlvym10ojzfo
	I1018 14:15:50.919589   94518 out.go:252]   - Configuring RBAC rules ...
	I1018 14:15:50.919736   94518 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 14:15:50.923222   94518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 14:15:50.928452   94518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 14:15:50.931223   94518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 14:15:50.933641   94518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 14:15:50.937165   94518 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 14:15:51.285114   94518 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 14:15:51.702798   94518 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 14:15:52.284201   94518 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 14:15:52.285014   94518 kubeadm.go:318] 
	I1018 14:15:52.285123   94518 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 14:15:52.285134   94518 kubeadm.go:318] 
	I1018 14:15:52.285253   94518 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 14:15:52.285261   94518 kubeadm.go:318] 
	I1018 14:15:52.285297   94518 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 14:15:52.285409   94518 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 14:15:52.285497   94518 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 14:15:52.285507   94518 kubeadm.go:318] 
	I1018 14:15:52.285594   94518 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 14:15:52.285604   94518 kubeadm.go:318] 
	I1018 14:15:52.285673   94518 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 14:15:52.285694   94518 kubeadm.go:318] 
	I1018 14:15:52.285777   94518 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 14:15:52.285856   94518 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 14:15:52.285945   94518 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 14:15:52.285954   94518 kubeadm.go:318] 
	I1018 14:15:52.286046   94518 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 14:15:52.286158   94518 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 14:15:52.286173   94518 kubeadm.go:318] 
	I1018 14:15:52.286260   94518 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 2jy7nx.1zs0hlvym10ojzfo \
	I1018 14:15:52.286412   94518 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9845daa2461a34dd973607f8353b114051f1b03075ff9500a74fb21865e04329 \
	I1018 14:15:52.286450   94518 kubeadm.go:318] 	--control-plane 
	I1018 14:15:52.286458   94518 kubeadm.go:318] 
	I1018 14:15:52.286553   94518 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 14:15:52.286561   94518 kubeadm.go:318] 
	I1018 14:15:52.286655   94518 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 2jy7nx.1zs0hlvym10ojzfo \
	I1018 14:15:52.286798   94518 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9845daa2461a34dd973607f8353b114051f1b03075ff9500a74fb21865e04329 
	I1018 14:15:52.288880   94518 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 14:15:52.289078   94518 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
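Both preflight warnings above are benign here; the second one prints its own remedy, which would simply be (not required on minikube nodes, where minikube starts the kubelet unit itself):
	sudo systemctl enable kubelet.service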
	I1018 14:15:52.289109   94518 cni.go:84] Creating CNI manager for ""
	I1018 14:15:52.289123   94518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 14:15:52.290888   94518 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 14:15:52.292177   94518 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 14:15:52.296572   94518 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 14:15:52.296594   94518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 14:15:52.309832   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
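Once the CNI manifest is applied, the resulting pods can be checked with the same kubeconfig; a sketch, assuming the manifest creates the usual kindnet DaemonSet labeled app=kindnet:
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get pods -l app=kindnet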
	I1018 14:15:52.517329   94518 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 14:15:52.517424   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:52.517457   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-493618 minikube.k8s.io/updated_at=2025_10_18T14_15_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=addons-493618 minikube.k8s.io/primary=true
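The minikube.k8s.io/* labels applied above can be confirmed once the node is Ready (sketch):
	kubectl get node addons-493618 --show-labels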
	I1018 14:15:52.601850   94518 ops.go:34] apiserver oom_adj: -16
	I1018 14:15:52.601988   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:53.102345   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:53.602765   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:54.102512   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:54.602301   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:55.102326   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:55.602077   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:56.102665   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:56.602275   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:57.102902   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:57.602898   94518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 14:15:57.666050   94518 kubeadm.go:1113] duration metric: took 5.148697107s to wait for elevateKubeSystemPrivileges
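The repeated "kubectl get sa default" calls above are a poll: the default ServiceAccount only exists once the controller-manager's serviceaccount controller has started, so privilege elevation waits for it. A sketch of the equivalent wait loop:
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # the log above shows retries on a similar ~500ms cadence
	done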
	I1018 14:15:57.666085   94518 kubeadm.go:402] duration metric: took 14.781070154s to StartCluster
	I1018 14:15:57.666113   94518 settings.go:142] acquiring lock: {Name:mk9d9d72167811b3554435e137ed17137bf54a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:57.666241   94518 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 14:15:57.666666   94518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/kubeconfig: {Name:mkdc6fbc9be8e57447490b30dd5bb0086611876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:57.666904   94518 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 14:15:57.666964   94518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 14:15:57.667023   94518 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
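The toEnable map above is the same state the addons CLI reports; it can be listed directly for this profile (sketch):
	minikube addons list -p addons-493618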
	I1018 14:15:57.667176   94518 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:15:57.667191   94518 addons.go:69] Setting ingress-dns=true in profile "addons-493618"
	I1018 14:15:57.667213   94518 addons.go:238] Setting addon ingress-dns=true in "addons-493618"
	I1018 14:15:57.667219   94518 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-493618"
	I1018 14:15:57.667224   94518 addons.go:69] Setting cloud-spanner=true in profile "addons-493618"
	I1018 14:15:57.667225   94518 addons.go:69] Setting yakd=true in profile "addons-493618"
	I1018 14:15:57.667237   94518 addons.go:238] Setting addon cloud-spanner=true in "addons-493618"
	I1018 14:15:57.667243   94518 addons.go:238] Setting addon yakd=true in "addons-493618"
	I1018 14:15:57.667261   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667270   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667306   94518 addons.go:69] Setting registry-creds=true in profile "addons-493618"
	I1018 14:15:57.667319   94518 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-493618"
	I1018 14:15:57.667325   94518 addons.go:238] Setting addon registry-creds=true in "addons-493618"
	I1018 14:15:57.667340   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667333   94518 addons.go:69] Setting ingress=true in profile "addons-493618"
	I1018 14:15:57.667353   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667362   94518 addons.go:238] Setting addon ingress=true in "addons-493618"
	I1018 14:15:57.667347   94518 addons.go:69] Setting gcp-auth=true in profile "addons-493618"
	I1018 14:15:57.667379   94518 addons.go:69] Setting inspektor-gadget=true in profile "addons-493618"
	I1018 14:15:57.667395   94518 addons.go:238] Setting addon inspektor-gadget=true in "addons-493618"
	I1018 14:15:57.667413   94518 mustload.go:65] Loading cluster: addons-493618
	I1018 14:15:57.667421   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667425   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667659   94518 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:15:57.667849   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667856   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667873   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667881   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667885   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667927   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667957   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.667977   94518 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-493618"
	I1018 14:15:57.667997   94518 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-493618"
	I1018 14:15:57.668260   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.668538   94518 addons.go:69] Setting volcano=true in profile "addons-493618"
	I1018 14:15:57.668558   94518 addons.go:238] Setting addon volcano=true in "addons-493618"
	I1018 14:15:57.668585   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.668706   94518 addons.go:69] Setting default-storageclass=true in profile "addons-493618"
	I1018 14:15:57.668731   94518 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-493618"
	I1018 14:15:57.668892   94518 addons.go:69] Setting volumesnapshots=true in profile "addons-493618"
	I1018 14:15:57.668932   94518 addons.go:238] Setting addon volumesnapshots=true in "addons-493618"
	I1018 14:15:57.668964   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.669072   94518 addons.go:69] Setting storage-provisioner=true in profile "addons-493618"
	I1018 14:15:57.669100   94518 addons.go:238] Setting addon storage-provisioner=true in "addons-493618"
	I1018 14:15:57.669121   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.669367   94518 out.go:179] * Verifying Kubernetes components...
	I1018 14:15:57.667211   94518 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-493618"
	I1018 14:15:57.669415   94518 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-493618"
	I1018 14:15:57.669445   94518 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-493618"
	I1018 14:15:57.669466   94518 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-493618"
	I1018 14:15:57.669478   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.669495   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.669783   94518 addons.go:69] Setting registry=true in profile "addons-493618"
	I1018 14:15:57.669803   94518 addons.go:238] Setting addon registry=true in "addons-493618"
	I1018 14:15:57.669828   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667372   94518 addons.go:69] Setting metrics-server=true in profile "addons-493618"
	I1018 14:15:57.670134   94518 addons.go:238] Setting addon metrics-server=true in "addons-493618"
	I1018 14:15:57.670161   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.667262   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.671078   94518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 14:15:57.677610   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.677633   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.678278   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.678433   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.680282   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.683274   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.686374   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.687318   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.687981   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.726980   94518 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 14:15:57.727164   94518 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 14:15:57.728296   94518 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 14:15:57.728322   94518 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 14:15:57.728394   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.731709   94518 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 14:15:57.735505   94518 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 14:15:57.735529   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 14:15:57.735623   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.744401   94518 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 14:15:57.746166   94518 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 14:15:57.746193   94518 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 14:15:57.746276   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.753364   94518 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-493618"
	I1018 14:15:57.753422   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.753977   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.757779   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.760961   94518 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 14:15:57.761050   94518 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 14:15:57.761128   94518 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 14:15:57.765412   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 14:15:57.765469   94518 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 14:15:57.765570   94518 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 14:15:57.765575   94518 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 14:15:57.765590   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 14:15:57.765649   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.765678   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.773672   94518 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 14:15:57.782459   94518 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 14:15:57.782523   94518 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 14:15:57.782594   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.782951   94518 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:15:57.783453   94518 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 14:15:57.783474   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 14:15:57.784494   94518 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 14:15:57.785442   94518 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 14:15:57.785814   94518 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:15:57.785850   94518 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 14:15:57.785866   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 14:15:57.785946   94518 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 14:15:57.786008   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.786341   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.795904   94518 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 14:15:57.795986   94518 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 14:15:57.796002   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 14:15:57.796075   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.797016   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 14:15:57.797107   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.797727   94518 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 14:15:57.797746   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 14:15:57.797798   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	W1018 14:15:57.799421   94518 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 14:15:57.802268   94518 addons.go:238] Setting addon default-storageclass=true in "addons-493618"
	I1018 14:15:57.802319   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:15:57.802790   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:15:57.803968   94518 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 14:15:57.806759   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.806881   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 14:15:57.807070   94518 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 14:15:57.807097   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 14:15:57.807159   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.809404   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 14:15:57.810905   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 14:15:57.812585   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 14:15:57.814158   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 14:15:57.817562   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 14:15:57.818954   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 14:15:57.820159   94518 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 14:15:57.821469   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.822222   94518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
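The sed pipeline above edits the CoreDNS Corefile in place: it inserts a hosts plugin block before the "forward . /etc/resolv.conf" line so in-cluster lookups of host.minikube.internal resolve to the gateway, and adds a log directive ahead of errors. The inserted fragment looks like:
	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}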
	I1018 14:15:57.822661   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.825309   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 14:15:57.825341   94518 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 14:15:57.825404   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.843406   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.845448   94518 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 14:15:57.846549   94518 out.go:179]   - Using image docker.io/busybox:stable
	I1018 14:15:57.847761   94518 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 14:15:57.847936   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 14:15:57.848446   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.848859   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.862892   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.865577   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.865604   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.867128   94518 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 14:15:57.867148   94518 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 14:15:57.867202   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:15:57.870311   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.875963   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.876057   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.878232   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	W1018 14:15:57.891707   94518 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 14:15:57.891829   94518 retry.go:31] will retry after 359.382679ms: ssh: handshake failed: EOF
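
The handshake EOF is transient (the container's sshd is still coming up), so sshutil just schedules another dial. A simplified Go sketch of that retry-with-jittered-backoff pattern (attempt count and timings are invented; minikube's actual helper may back off differently):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry calls fn until it succeeds or attempts run out, sleeping a growing,
    // jittered delay between tries, like the "will retry after 359.382679ms"
    // line above.
    func retry(attempts int, base time.Duration, fn func() error) error {
        for i := 0; i < attempts; i++ {
            err := fn()
            if err == nil {
                return nil
            }
            delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return fmt.Errorf("all %d attempts failed", attempts)
    }

    func main() {
        dials := 0
        _ = retry(5, 300*time.Millisecond, func() error {
            dials++
            if dials < 3 {
                return errors.New("ssh: handshake failed: EOF") // fails twice, then connects
            }
            return nil
        })
    }
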
	I1018 14:15:57.896432   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.907502   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.909844   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:15:57.912211   94518 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 14:15:57.988091   94518 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 14:15:57.988173   94518 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 14:15:57.997450   94518 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 14:15:57.997478   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 14:15:58.003508   94518 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 14:15:58.003538   94518 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 14:15:58.006239   94518 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 14:15:58.006263   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 14:15:58.015848   94518 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 14:15:58.015893   94518 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 14:15:58.020396   94518 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 14:15:58.020421   94518 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 14:15:58.024488   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 14:15:58.035697   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 14:15:58.035896   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 14:15:58.038172   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 14:15:58.041347   94518 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 14:15:58.041371   94518 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 14:15:58.049321   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 14:15:58.050245   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 14:15:58.052160   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 14:15:58.061988   94518 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:15:58.062019   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 14:15:58.069226   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 14:15:58.070543   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 14:15:58.074239   94518 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 14:15:58.074279   94518 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 14:15:58.079168   94518 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 14:15:58.079198   94518 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 14:15:58.092132   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:15:58.096100   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 14:15:58.102856   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 14:15:58.122432   94518 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 14:15:58.122460   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 14:15:58.133719   94518 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 14:15:58.133827   94518 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 14:15:58.178253   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 14:15:58.201737   94518 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 14:15:58.201955   94518 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 14:15:58.250630   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 14:15:58.250660   94518 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 14:15:58.257881   94518 node_ready.go:35] waiting up to 6m0s for node "addons-493618" to be "Ready" ...
	I1018 14:15:58.259987   94518 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1018 14:15:58.305869   94518 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 14:15:58.305892   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 14:15:58.372074   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 14:15:58.495259   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 14:15:58.495413   94518 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 14:15:58.542356   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 14:15:58.542459   94518 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 14:15:58.574546   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 14:15:58.574578   94518 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 14:15:58.610004   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 14:15:58.610119   94518 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 14:15:58.650707   94518 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 14:15:58.650741   94518 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 14:15:58.689762   94518 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 14:15:58.689866   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 14:15:58.728580   94518 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 14:15:58.728663   94518 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 14:15:58.777291   94518 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 14:15:58.777320   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 14:15:58.779077   94518 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-493618" context rescaled to 1 replicas
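
Rescaling coredns to one replica keeps the single-node cluster light. The same operation through client-go's scale subresource, as a sketch (names taken from the log; clientset construction omitted, see the poll-loop sketch just below):

    package demo

    import (
        "context"

        autoscalingv1 "k8s.io/api/autoscaling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // Rescale sets a deployment's replica count via the scale subresource,
    // which is what the "rescaled to 1 replicas" line reports.
    func Rescale(cs kubernetes.Interface, ns, name string, replicas int32) error {
        scale := &autoscalingv1.Scale{
            ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
            Spec:       autoscalingv1.ScaleSpec{Replicas: replicas},
        }
        _, err := cs.AppsV1().Deployments(ns).UpdateScale(
            context.TODO(), name, scale, metav1.UpdateOptions{})
        return err
    }

    // Usage: Rescale(cs, "kube-system", "coredns", 1)
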
	I1018 14:15:58.793294   94518 addons.go:479] Verifying addon registry=true in "addons-493618"
	I1018 14:15:58.795632   94518 out.go:179] * Verifying registry addon...
	I1018 14:15:58.797513   94518 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 14:15:58.802260   94518 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 14:15:58.802346   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 14:15:58.819478   94518 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 14:15:58.819580   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
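
These kapi.go lines are a poll loop: list the pods matching a label selector, report their phase, sleep, and try again until everything is Running or the timeout expires. A client-go sketch of the same loop (kubeconfig path and poll interval are assumptions, not minikube's exact values):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPods polls until every pod matching selector is Running; the real
    // loop is what prints the "current state: Pending" lines above.
    func waitForPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                ready := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        ready = false // still Pending or ContainerCreating
                    }
                }
                if ready {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("pods %q in %q not Running after %v", selector, ns, timeout)
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        if err := waitForPods(cs, "kube-system",
            "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
            panic(err)
        }
    }
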
	I1018 14:15:58.840463   94518 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 14:15:58.840559   94518 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 14:15:58.884762   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 14:15:59.253579   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.21783586s)
	I1018 14:15:59.253646   94518 addons.go:479] Verifying addon ingress=true in "addons-493618"
	I1018 14:15:59.253649   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.217694877s)
	I1018 14:15:59.253724   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.215515463s)
	I1018 14:15:59.253830   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.204458532s)
	I1018 14:15:59.253862   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.203589981s)
	I1018 14:15:59.253978   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.184732075s)
	I1018 14:15:59.253955   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.201771193s)
	I1018 14:15:59.254125   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183543894s)
	I1018 14:15:59.254146   94518 addons.go:479] Verifying addon metrics-server=true in "addons-493618"
	I1018 14:15:59.254259   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.162092007s)
	I1018 14:15:59.254308   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.151427901s)
	W1018 14:15:59.254331   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:15:59.254361   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.076083963s)
	I1018 14:15:59.254360   94518 retry.go:31] will retry after 263.001722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
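
The root cause of this retry loop is right in the stderr: a document in ig-crd.yaml is missing its apiVersion and kind, so kubectl's client-side validation rejects the file on every attempt, with or without --force (the identical error recurs on each retry below); backing off cannot fix a malformed manifest. The check kubectl performs can be reproduced in a few lines of Go with gopkg.in/yaml.v3 (local file path assumed):

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    // typeMeta carries the two fields the validator reported as unset.
    type typeMeta struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    func main() {
        f, err := os.Open("ig-crd.yaml") // /etc/kubernetes/addons/ig-crd.yaml on the node
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f) // a manifest may hold several ----separated documents
        for i := 0; ; i++ {
            var tm typeMeta
            if err := dec.Decode(&tm); err != nil {
                break // io.EOF once every document has been read
            }
            if tm.APIVersion == "" || tm.Kind == "" {
                fmt.Printf("document %d: apiVersion/kind not set; kubectl will reject it\n", i)
            }
        }
    }
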
	I1018 14:15:59.254285   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.158155774s)
	I1018 14:15:59.255381   94518 out.go:179] * Verifying ingress addon...
	I1018 14:15:59.256267   94518 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-493618 service yakd-dashboard -n yakd-dashboard
	
	I1018 14:15:59.258528   94518 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 14:15:59.262829   94518 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 14:15:59.262849   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 14:15:59.262881   94518 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1018 14:15:59.362679   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:15:59.517934   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:15:59.762348   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:15:59.767796   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.395673176s)
	W1018 14:15:59.767854   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 14:15:59.767878   94518 retry.go:31] will retry after 185.211057ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
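
Unlike the ig-crd failure, this one is pure ordering: the VolumeSnapshotClass object is submitted in the same apply as the CRDs that define it, and the API server has not established the new types yet, hence "ensure CRDs are installed first"; the forced re-apply a moment later goes through once the CRDs are registered. The programmatic fix is to wait for a CRD's Established condition before creating instances of it; a sketch with the apiextensions client (clientset construction omitted):

    package crdwait

    import (
        "context"
        "time"

        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
    )

    // WaitEstablished blocks until the named CRD reports Established=True, after
    // which objects of its kind can be created without the "no matches for
    // kind" error seen above.
    func WaitEstablished(cs clientset.Interface, name string) error {
        return wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
            crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().
                Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // CRD not visible yet; keep polling
            }
            for _, c := range crd.Status.Conditions {
                if c.Type == apiextensionsv1.Established && c.Status == apiextensionsv1.ConditionTrue {
                    return true, nil
                }
            }
            return false, nil
        })
    }

    // Usage: WaitEstablished(cs, "volumesnapshotclasses.snapshot.storage.k8s.io")
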
	I1018 14:15:59.768052   94518 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-493618"
	I1018 14:15:59.770042   94518 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 14:15:59.772172   94518 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 14:15:59.775895   94518 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 14:15:59.775932   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:15:59.862807   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:15:59.953296   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1018 14:16:00.179866   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:00.179932   94518 retry.go:31] will retry after 259.138229ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 14:16:00.261895   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:00.262066   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:00.276175   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:00.300887   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:00.439689   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:00.762081   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:00.862741   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:00.862953   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:01.262222   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:01.275838   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:01.300734   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:01.762110   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:01.862891   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:01.863056   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:02.261689   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:02.275594   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:02.300586   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:02.456467   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.50311722s)
	I1018 14:16:02.456599   94518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.016876084s)
	W1018 14:16:02.456633   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:02.456657   94518 retry.go:31] will retry after 555.919598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 14:16:02.761271   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:02.761679   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:02.862629   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:02.862696   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:03.013466   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:03.261821   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:03.275574   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:03.301416   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 14:16:03.558757   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:03.558796   94518 retry.go:31] will retry after 725.766019ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:03.761660   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:03.862928   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:03.862971   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:04.262257   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:04.275978   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:04.285123   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:04.301354   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:04.762331   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 14:16:04.844992   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:04.845023   94518 retry.go:31] will retry after 1.701988941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:04.862778   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:04.862875   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 14:16:05.261697   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:05.262238   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:05.275990   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:05.300734   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:05.366047   94518 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 14:16:05.366115   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:16:05.383978   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:16:05.493818   94518 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 14:16:05.506797   94518 addons.go:238] Setting addon gcp-auth=true in "addons-493618"
	I1018 14:16:05.506861   94518 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:16:05.507286   94518 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:16:05.523892   94518 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 14:16:05.523968   94518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:16:05.541453   94518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:16:05.636326   94518 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 14:16:05.637653   94518 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 14:16:05.638692   94518 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 14:16:05.638712   94518 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 14:16:05.652837   94518 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 14:16:05.652861   94518 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 14:16:05.666299   94518 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 14:16:05.666320   94518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 14:16:05.680085   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 14:16:05.761566   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:05.775505   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:05.801315   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:05.994641   94518 addons.go:479] Verifying addon gcp-auth=true in "addons-493618"
	I1018 14:16:05.996092   94518 out.go:179] * Verifying gcp-auth addon...
	I1018 14:16:05.998105   94518 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 14:16:06.000784   94518 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 14:16:06.000799   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:06.261679   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:06.275363   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:06.301313   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:06.501300   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:06.547370   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:06.762544   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:06.775122   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:06.801020   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:07.001387   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:07.102721   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:07.102751   94518 retry.go:31] will retry after 1.894325627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 14:16:07.261769   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:07.261821   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:07.275476   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:07.301602   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:07.501354   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:07.761681   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:07.775315   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:07.801142   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:08.000985   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:08.261664   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:08.275376   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:08.301438   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:08.501200   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:08.762331   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:08.779339   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:08.801735   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:08.997988   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:09.001098   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:09.261663   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:09.275805   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:09.300898   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:09.500718   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:09.549206   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:09.549247   94518 retry.go:31] will retry after 3.310963502s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:09.761098   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 14:16:09.761118   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:09.776183   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:09.800955   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:10.002285   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:10.261461   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:10.275203   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:10.300857   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:10.501789   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:10.762046   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:10.775575   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:10.801657   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:11.001278   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:11.261449   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:11.275212   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:11.301160   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:11.500880   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:11.761928   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:11.775663   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:11.800279   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:12.001764   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:12.261645   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:12.261934   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:12.275426   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:12.301237   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:12.501106   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:12.762500   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:12.775341   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:12.801069   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:12.861213   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:13.001726   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:13.261985   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:13.275741   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:13.300410   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 14:16:13.412655   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:13.412687   94518 retry.go:31] will retry after 2.146003967s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
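The apply fails in kubectl's client-side validation: some document inside ig-crd.yaml is missing its apiVersion and kind fields, while every other object in the bundle applies cleanly ("unchanged"/"configured"). A manifest that passes this check must declare both fields up front; a minimal hypothetical fragment (the actual contents of ig-crd.yaml are not shown in this log):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: example.gadget.example.io   # hypothetical name, for illustration only

Passing --validate=false, as the error message suggests, would likely only mask the problem, since kubectl still cannot route an object whose kind it cannot read.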
	I1018 14:16:13.501415   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:13.761464   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:13.775396   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:13.801074   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:14.001649   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:14.261663   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:14.275331   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:14.301036   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:14.500895   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:14.760721   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:14.762189   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:14.775457   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:14.801062   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:15.001069   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:15.261905   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:15.275163   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:15.300759   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:15.501790   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:15.558849   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:15.761297   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:15.775871   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:15.800389   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:16.001291   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:16.114482   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:16.114511   94518 retry.go:31] will retry after 5.173996473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:16.261692   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:16.275397   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:16.301389   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:16.500980   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:16.760795   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:16.762022   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:16.775519   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:16.801313   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:17.000944   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:17.261757   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:17.275325   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:17.300931   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:17.502121   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:17.761220   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:17.775796   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:17.800763   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:18.001822   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:18.261706   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:18.275401   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:18.301218   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:18.500894   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:18.761938   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:18.775652   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:18.800266   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:19.001007   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:19.261023   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:19.261757   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:19.275393   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:19.301127   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:19.500951   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:19.761787   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:19.775216   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:19.800787   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:20.001688   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:20.261951   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:20.275392   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:20.301151   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:20.501366   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:20.761599   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:20.776707   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:20.800198   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:21.001395   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:21.261329   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 14:16:21.261409   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:21.275153   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:21.289245   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:21.300476   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:21.501513   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:21.761123   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:21.775774   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:21.800635   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 14:16:21.851749   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:21.851778   94518 retry.go:31] will retry after 9.714380288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
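Each failed apply is rescheduled with a roughly doubling delay (2.1s, 5.2s, 9.7s, now 19.4s), i.e. exponential backoff with jitter. A minimal Go sketch of that pattern, illustrative only and not minikube's actual retry code:

    // backoff.go - sketch of the retry-with-backoff pattern visible in the
    // log, where the wait roughly doubles after each failed attempt.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            time.Sleep(delay)
            delay *= 2 // roughly matches the doubling intervals in the log
        }
        return fmt.Errorf("all %d attempts failed: %w", attempts, err)
    }

    func main() {
        calls := 0
        err := retryWithBackoff(4, 2*time.Second, func() error {
            calls++
            if calls < 3 {
                return errors.New("apply failed") // stand-in for the kubectl error above
            }
            return nil
        })
        fmt.Println(calls, err)
    }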
	I1018 14:16:22.001747   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:22.261813   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:22.275852   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:22.300396   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:22.501345   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:22.761740   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:22.775460   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:22.801088   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:23.000938   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:23.261494   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:23.275351   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:23.301186   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:23.501437   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:23.761231   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:23.761277   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:23.776153   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:23.800929   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:24.001798   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:24.261566   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:24.275231   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:24.300782   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:24.501826   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:24.761655   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:24.775311   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:24.801269   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:25.001202   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:25.261268   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:25.276037   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:25.300709   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:25.501743   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:25.761717   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:25.761933   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:25.775270   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:25.800968   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:26.001027   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:26.261514   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:26.275058   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:26.300235   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:26.500857   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:26.761281   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:26.775331   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:26.801253   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:27.001003   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:27.261650   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:27.275357   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:27.301224   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:27.501285   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:27.761635   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:27.775243   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:27.801161   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:28.001260   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:28.261155   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:28.261172   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:28.276267   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:28.300992   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:28.501784   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:28.761766   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:28.775549   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:28.801180   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:29.001049   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:29.261993   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:29.275515   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:29.301883   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:29.501469   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:29.761634   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:29.775146   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:29.801064   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:30.001967   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:30.261684   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:30.275382   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:30.301634   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:30.501572   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:30.761473   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:30.762048   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:30.775275   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:30.800997   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:31.000979   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:31.261897   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:31.275984   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:31.300628   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:31.501831   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:31.566932   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:31.761417   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:31.774979   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:31.800622   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:32.001291   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:32.118968   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:32.119002   94518 retry.go:31] will retry after 19.360841038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:32.261895   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:32.275779   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:32.304391   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:32.501587   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:32.761735   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:32.761898   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:32.775323   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:32.801370   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:33.001609   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:33.261584   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:33.275443   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:33.301126   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:33.501842   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:33.761935   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:33.774859   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:33.800261   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:34.001159   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:34.261227   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:34.275683   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:34.301293   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:34.501219   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:34.761634   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:34.775251   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:34.801016   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:35.002045   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:35.261059   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:35.262099   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:35.275492   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:35.301345   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:35.501646   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:35.761690   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:35.775306   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:35.800935   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:36.001009   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:36.261734   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:36.275232   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:36.300862   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:36.502157   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:36.761205   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:36.776410   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:36.801109   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:37.001783   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:37.261689   94518 node_ready.go:57] node "addons-493618" has "Ready":"False" status (will retry)
	I1018 14:16:37.261744   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:37.275555   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:37.301669   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:37.501215   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:37.762014   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:37.775442   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:37.801110   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:38.000880   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:38.263391   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:38.275251   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:38.301068   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:38.501978   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:38.760157   94518 node_ready.go:49] node "addons-493618" is "Ready"
	I1018 14:16:38.760187   94518 node_ready.go:38] duration metric: took 40.502258296s for node "addons-493618" to be "Ready" ...
	I1018 14:16:38.760202   94518 api_server.go:52] waiting for apiserver process to appear ...
	I1018 14:16:38.760256   94518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 14:16:38.761614   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:38.775477   94518 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 14:16:38.775499   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:38.778619   94518 api_server.go:72] duration metric: took 41.111664217s to wait for apiserver process to appear ...
	I1018 14:16:38.778646   94518 api_server.go:88] waiting for apiserver healthz status ...
	I1018 14:16:38.778670   94518 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 14:16:38.782820   94518 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 14:16:38.783979   94518 api_server.go:141] control plane version: v1.34.1
	I1018 14:16:38.784055   94518 api_server.go:131] duration metric: took 5.400033ms to wait for apiserver health ...
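With the node Ready, minikube probes the apiserver's /healthz endpoint directly and proceeds once it returns 200/ok. The same probe can be run by hand:

    kubectl get --raw /healthz
    # or against the address shown in the log (may require credentials,
    # depending on whether anonymous auth covers /healthz):
    curl -k https://192.168.49.2:8443/healthz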
	I1018 14:16:38.784069   94518 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 14:16:38.790511   94518 system_pods.go:59] 20 kube-system pods found
	I1018 14:16:38.790555   94518 system_pods.go:61] "amd-gpu-device-plugin-ps8fn" [212e7c35-e96b-474d-a839-a1234082db1e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:16:38.790566   94518 system_pods.go:61] "coredns-66bc5c9577-zsv4k" [f2e200e1-f869-49c9-9964-a7ce8b78fc36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:16:38.790574   94518 system_pods.go:61] "csi-hostpath-attacher-0" [3623b39a-4edb-4205-aeef-6f143a1226ca] Pending
	I1018 14:16:38.790580   94518 system_pods.go:61] "csi-hostpath-resizer-0" [d1f8fabd-93ae-47ef-9455-9609361e348d] Pending
	I1018 14:16:38.790589   94518 system_pods.go:61] "csi-hostpathplugin-t8ksl" [ed3177ab-2b66-47a9-8f89-40db56cbc332] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 14:16:38.790595   94518 system_pods.go:61] "etcd-addons-493618" [b9b0c453-cead-48a5-916f-b6743b403b4d] Running
	I1018 14:16:38.790602   94518 system_pods.go:61] "kindnet-vhk9j" [f447a047-b1f9-4b54-8f86-2bceeda6e6f2] Running
	I1018 14:16:38.790608   94518 system_pods.go:61] "kube-apiserver-addons-493618" [75f188ee-c145-4c6c-8fe4-ec8c6c92a91c] Running
	I1018 14:16:38.790613   94518 system_pods.go:61] "kube-controller-manager-addons-493618" [18b03cc0-9988-4bdf-b7a2-1c4c0a50ea7c] Running
	I1018 14:16:38.790621   94518 system_pods.go:61] "kube-ingress-dns-minikube" [d97bcf56-e215-486d-bd23-84d300a41f66] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:16:38.790626   94518 system_pods.go:61] "kube-proxy-5x2v2" [43716c06-fdd2-4f68-87f9-d90cf2f16440] Running
	I1018 14:16:38.790631   94518 system_pods.go:61] "kube-scheduler-addons-493618" [dc3fc718-9168-4264-aa9e-4ce985ac1d72] Running
	I1018 14:16:38.790638   94518 system_pods.go:61] "metrics-server-85b7d694d7-hzzlq" [93c9f378-d616-4060-a537-9060c4ce996a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:16:38.790647   94518 system_pods.go:61] "nvidia-device-plugin-daemonset-w9ks6" [0837d0f2-cef9-4ff6-b233-2547cb3c5f55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:16:38.790655   94518 system_pods.go:61] "registry-6b586f9694-pdjc2" [d5f69ebc-b615-4334-8fe1-593c6fe7f496] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:16:38.790665   94518 system_pods.go:61] "registry-creds-764b6fb674-czp24" [a3c3218a-127e-4d0d-90f6-a2b735fc7c5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:16:38.790681   94518 system_pods.go:61] "registry-proxy-dddz6" [55ecd78e-f023-433a-80aa-4a98aed74734] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:16:38.790688   94518 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8ftdc" [41ec4b8b-e0f1-4aed-a826-e4d50c52e35d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:38.790699   94518 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fcm6w" [fcafa25f-dac8-4c6c-9dee-6c155ff5f214] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:38.790706   94518 system_pods.go:61] "storage-provisioner" [0ce9ef3e-e1d3-4979-b307-1d30a38cfc5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 14:16:38.790714   94518 system_pods.go:74] duration metric: took 6.637048ms to wait for pod list to return data ...
	I1018 14:16:38.790727   94518 default_sa.go:34] waiting for default service account to be created ...
	I1018 14:16:38.813945   94518 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 14:16:38.813976   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:38.817277   94518 default_sa.go:45] found service account: "default"
	I1018 14:16:38.817303   94518 default_sa.go:55] duration metric: took 26.568684ms for default service account to be created ...
	I1018 14:16:38.817314   94518 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 14:16:38.836792   94518 system_pods.go:86] 20 kube-system pods found
	I1018 14:16:38.836840   94518 system_pods.go:89] "amd-gpu-device-plugin-ps8fn" [212e7c35-e96b-474d-a839-a1234082db1e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:16:38.836858   94518 system_pods.go:89] "coredns-66bc5c9577-zsv4k" [f2e200e1-f869-49c9-9964-a7ce8b78fc36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:16:38.836867   94518 system_pods.go:89] "csi-hostpath-attacher-0" [3623b39a-4edb-4205-aeef-6f143a1226ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 14:16:38.836875   94518 system_pods.go:89] "csi-hostpath-resizer-0" [d1f8fabd-93ae-47ef-9455-9609361e348d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 14:16:38.836883   94518 system_pods.go:89] "csi-hostpathplugin-t8ksl" [ed3177ab-2b66-47a9-8f89-40db56cbc332] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 14:16:38.836890   94518 system_pods.go:89] "etcd-addons-493618" [b9b0c453-cead-48a5-916f-b6743b403b4d] Running
	I1018 14:16:38.836900   94518 system_pods.go:89] "kindnet-vhk9j" [f447a047-b1f9-4b54-8f86-2bceeda6e6f2] Running
	I1018 14:16:38.836907   94518 system_pods.go:89] "kube-apiserver-addons-493618" [75f188ee-c145-4c6c-8fe4-ec8c6c92a91c] Running
	I1018 14:16:38.836927   94518 system_pods.go:89] "kube-controller-manager-addons-493618" [18b03cc0-9988-4bdf-b7a2-1c4c0a50ea7c] Running
	I1018 14:16:38.836935   94518 system_pods.go:89] "kube-ingress-dns-minikube" [d97bcf56-e215-486d-bd23-84d300a41f66] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:16:38.836944   94518 system_pods.go:89] "kube-proxy-5x2v2" [43716c06-fdd2-4f68-87f9-d90cf2f16440] Running
	I1018 14:16:38.836951   94518 system_pods.go:89] "kube-scheduler-addons-493618" [dc3fc718-9168-4264-aa9e-4ce985ac1d72] Running
	I1018 14:16:38.836958   94518 system_pods.go:89] "metrics-server-85b7d694d7-hzzlq" [93c9f378-d616-4060-a537-9060c4ce996a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:16:38.836970   94518 system_pods.go:89] "nvidia-device-plugin-daemonset-w9ks6" [0837d0f2-cef9-4ff6-b233-2547cb3c5f55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:16:38.836985   94518 system_pods.go:89] "registry-6b586f9694-pdjc2" [d5f69ebc-b615-4334-8fe1-593c6fe7f496] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:16:38.836997   94518 system_pods.go:89] "registry-creds-764b6fb674-czp24" [a3c3218a-127e-4d0d-90f6-a2b735fc7c5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:16:38.837005   94518 system_pods.go:89] "registry-proxy-dddz6" [55ecd78e-f023-433a-80aa-4a98aed74734] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:16:38.837016   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8ftdc" [41ec4b8b-e0f1-4aed-a826-e4d50c52e35d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:38.837026   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fcm6w" [fcafa25f-dac8-4c6c-9dee-6c155ff5f214] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:38.837036   94518 system_pods.go:89] "storage-provisioner" [0ce9ef3e-e1d3-4979-b307-1d30a38cfc5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 14:16:38.837060   94518 retry.go:31] will retry after 303.187947ms: missing components: kube-dns
	I1018 14:16:39.002953   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:39.146165   94518 system_pods.go:86] 20 kube-system pods found
	I1018 14:16:39.146209   94518 system_pods.go:89] "amd-gpu-device-plugin-ps8fn" [212e7c35-e96b-474d-a839-a1234082db1e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:16:39.146220   94518 system_pods.go:89] "coredns-66bc5c9577-zsv4k" [f2e200e1-f869-49c9-9964-a7ce8b78fc36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:16:39.146229   94518 system_pods.go:89] "csi-hostpath-attacher-0" [3623b39a-4edb-4205-aeef-6f143a1226ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 14:16:39.146237   94518 system_pods.go:89] "csi-hostpath-resizer-0" [d1f8fabd-93ae-47ef-9455-9609361e348d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 14:16:39.146245   94518 system_pods.go:89] "csi-hostpathplugin-t8ksl" [ed3177ab-2b66-47a9-8f89-40db56cbc332] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 14:16:39.146251   94518 system_pods.go:89] "etcd-addons-493618" [b9b0c453-cead-48a5-916f-b6743b403b4d] Running
	I1018 14:16:39.146257   94518 system_pods.go:89] "kindnet-vhk9j" [f447a047-b1f9-4b54-8f86-2bceeda6e6f2] Running
	I1018 14:16:39.146264   94518 system_pods.go:89] "kube-apiserver-addons-493618" [75f188ee-c145-4c6c-8fe4-ec8c6c92a91c] Running
	I1018 14:16:39.146270   94518 system_pods.go:89] "kube-controller-manager-addons-493618" [18b03cc0-9988-4bdf-b7a2-1c4c0a50ea7c] Running
	I1018 14:16:39.146285   94518 system_pods.go:89] "kube-ingress-dns-minikube" [d97bcf56-e215-486d-bd23-84d300a41f66] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:16:39.146293   94518 system_pods.go:89] "kube-proxy-5x2v2" [43716c06-fdd2-4f68-87f9-d90cf2f16440] Running
	I1018 14:16:39.146299   94518 system_pods.go:89] "kube-scheduler-addons-493618" [dc3fc718-9168-4264-aa9e-4ce985ac1d72] Running
	I1018 14:16:39.146311   94518 system_pods.go:89] "metrics-server-85b7d694d7-hzzlq" [93c9f378-d616-4060-a537-9060c4ce996a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:16:39.146320   94518 system_pods.go:89] "nvidia-device-plugin-daemonset-w9ks6" [0837d0f2-cef9-4ff6-b233-2547cb3c5f55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:16:39.146329   94518 system_pods.go:89] "registry-6b586f9694-pdjc2" [d5f69ebc-b615-4334-8fe1-593c6fe7f496] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:16:39.146342   94518 system_pods.go:89] "registry-creds-764b6fb674-czp24" [a3c3218a-127e-4d0d-90f6-a2b735fc7c5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:16:39.146354   94518 system_pods.go:89] "registry-proxy-dddz6" [55ecd78e-f023-433a-80aa-4a98aed74734] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:16:39.146362   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8ftdc" [41ec4b8b-e0f1-4aed-a826-e4d50c52e35d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:39.146372   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fcm6w" [fcafa25f-dac8-4c6c-9dee-6c155ff5f214] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:39.146381   94518 system_pods.go:89] "storage-provisioner" [0ce9ef3e-e1d3-4979-b307-1d30a38cfc5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 14:16:39.146407   94518 retry.go:31] will retry after 360.79099ms: missing components: kube-dns
	I1018 14:16:39.263006   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:39.276186   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:39.301149   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:39.502995   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:39.512628   94518 system_pods.go:86] 20 kube-system pods found
	I1018 14:16:39.512677   94518 system_pods.go:89] "amd-gpu-device-plugin-ps8fn" [212e7c35-e96b-474d-a839-a1234082db1e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:16:39.512690   94518 system_pods.go:89] "coredns-66bc5c9577-zsv4k" [f2e200e1-f869-49c9-9964-a7ce8b78fc36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 14:16:39.512702   94518 system_pods.go:89] "csi-hostpath-attacher-0" [3623b39a-4edb-4205-aeef-6f143a1226ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 14:16:39.512711   94518 system_pods.go:89] "csi-hostpath-resizer-0" [d1f8fabd-93ae-47ef-9455-9609361e348d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 14:16:39.512719   94518 system_pods.go:89] "csi-hostpathplugin-t8ksl" [ed3177ab-2b66-47a9-8f89-40db56cbc332] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 14:16:39.512726   94518 system_pods.go:89] "etcd-addons-493618" [b9b0c453-cead-48a5-916f-b6743b403b4d] Running
	I1018 14:16:39.512736   94518 system_pods.go:89] "kindnet-vhk9j" [f447a047-b1f9-4b54-8f86-2bceeda6e6f2] Running
	I1018 14:16:39.512742   94518 system_pods.go:89] "kube-apiserver-addons-493618" [75f188ee-c145-4c6c-8fe4-ec8c6c92a91c] Running
	I1018 14:16:39.512751   94518 system_pods.go:89] "kube-controller-manager-addons-493618" [18b03cc0-9988-4bdf-b7a2-1c4c0a50ea7c] Running
	I1018 14:16:39.512761   94518 system_pods.go:89] "kube-ingress-dns-minikube" [d97bcf56-e215-486d-bd23-84d300a41f66] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:16:39.512770   94518 system_pods.go:89] "kube-proxy-5x2v2" [43716c06-fdd2-4f68-87f9-d90cf2f16440] Running
	I1018 14:16:39.512776   94518 system_pods.go:89] "kube-scheduler-addons-493618" [dc3fc718-9168-4264-aa9e-4ce985ac1d72] Running
	I1018 14:16:39.512785   94518 system_pods.go:89] "metrics-server-85b7d694d7-hzzlq" [93c9f378-d616-4060-a537-9060c4ce996a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:16:39.512798   94518 system_pods.go:89] "nvidia-device-plugin-daemonset-w9ks6" [0837d0f2-cef9-4ff6-b233-2547cb3c5f55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:16:39.512809   94518 system_pods.go:89] "registry-6b586f9694-pdjc2" [d5f69ebc-b615-4334-8fe1-593c6fe7f496] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:16:39.512817   94518 system_pods.go:89] "registry-creds-764b6fb674-czp24" [a3c3218a-127e-4d0d-90f6-a2b735fc7c5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:16:39.512828   94518 system_pods.go:89] "registry-proxy-dddz6" [55ecd78e-f023-433a-80aa-4a98aed74734] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:16:39.512838   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8ftdc" [41ec4b8b-e0f1-4aed-a826-e4d50c52e35d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:39.512850   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fcm6w" [fcafa25f-dac8-4c6c-9dee-6c155ff5f214] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:39.512858   94518 system_pods.go:89] "storage-provisioner" [0ce9ef3e-e1d3-4979-b307-1d30a38cfc5e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 14:16:39.512881   94518 retry.go:31] will retry after 432.482193ms: missing components: kube-dns
	I1018 14:16:39.762902   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:39.776402   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:39.801542   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:39.950641   94518 system_pods.go:86] 20 kube-system pods found
	I1018 14:16:39.950687   94518 system_pods.go:89] "amd-gpu-device-plugin-ps8fn" [212e7c35-e96b-474d-a839-a1234082db1e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 14:16:39.950695   94518 system_pods.go:89] "coredns-66bc5c9577-zsv4k" [f2e200e1-f869-49c9-9964-a7ce8b78fc36] Running
	I1018 14:16:39.950708   94518 system_pods.go:89] "csi-hostpath-attacher-0" [3623b39a-4edb-4205-aeef-6f143a1226ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 14:16:39.950716   94518 system_pods.go:89] "csi-hostpath-resizer-0" [d1f8fabd-93ae-47ef-9455-9609361e348d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 14:16:39.950726   94518 system_pods.go:89] "csi-hostpathplugin-t8ksl" [ed3177ab-2b66-47a9-8f89-40db56cbc332] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 14:16:39.950733   94518 system_pods.go:89] "etcd-addons-493618" [b9b0c453-cead-48a5-916f-b6743b403b4d] Running
	I1018 14:16:39.950743   94518 system_pods.go:89] "kindnet-vhk9j" [f447a047-b1f9-4b54-8f86-2bceeda6e6f2] Running
	I1018 14:16:39.950755   94518 system_pods.go:89] "kube-apiserver-addons-493618" [75f188ee-c145-4c6c-8fe4-ec8c6c92a91c] Running
	I1018 14:16:39.950767   94518 system_pods.go:89] "kube-controller-manager-addons-493618" [18b03cc0-9988-4bdf-b7a2-1c4c0a50ea7c] Running
	I1018 14:16:39.950776   94518 system_pods.go:89] "kube-ingress-dns-minikube" [d97bcf56-e215-486d-bd23-84d300a41f66] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 14:16:39.950795   94518 system_pods.go:89] "kube-proxy-5x2v2" [43716c06-fdd2-4f68-87f9-d90cf2f16440] Running
	I1018 14:16:39.950805   94518 system_pods.go:89] "kube-scheduler-addons-493618" [dc3fc718-9168-4264-aa9e-4ce985ac1d72] Running
	I1018 14:16:39.950813   94518 system_pods.go:89] "metrics-server-85b7d694d7-hzzlq" [93c9f378-d616-4060-a537-9060c4ce996a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 14:16:39.950825   94518 system_pods.go:89] "nvidia-device-plugin-daemonset-w9ks6" [0837d0f2-cef9-4ff6-b233-2547cb3c5f55] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 14:16:39.950837   94518 system_pods.go:89] "registry-6b586f9694-pdjc2" [d5f69ebc-b615-4334-8fe1-593c6fe7f496] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 14:16:39.950844   94518 system_pods.go:89] "registry-creds-764b6fb674-czp24" [a3c3218a-127e-4d0d-90f6-a2b735fc7c5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 14:16:39.950855   94518 system_pods.go:89] "registry-proxy-dddz6" [55ecd78e-f023-433a-80aa-4a98aed74734] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 14:16:39.950864   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8ftdc" [41ec4b8b-e0f1-4aed-a826-e4d50c52e35d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:39.950878   94518 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fcm6w" [fcafa25f-dac8-4c6c-9dee-6c155ff5f214] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 14:16:39.950883   94518 system_pods.go:89] "storage-provisioner" [0ce9ef3e-e1d3-4979-b307-1d30a38cfc5e] Running
	I1018 14:16:39.950903   94518 system_pods.go:126] duration metric: took 1.133578445s to wait for k8s-apps to be running ...
	I1018 14:16:39.950927   94518 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 14:16:39.950986   94518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 14:16:39.969681   94518 system_svc.go:56] duration metric: took 18.745966ms WaitForService to wait for kubelet
	I1018 14:16:39.969710   94518 kubeadm.go:586] duration metric: took 42.30276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 14:16:39.969733   94518 node_conditions.go:102] verifying NodePressure condition ...
	I1018 14:16:39.972886   94518 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 14:16:39.972931   94518 node_conditions.go:123] node cpu capacity is 8
	I1018 14:16:39.972952   94518 node_conditions.go:105] duration metric: took 3.212854ms to run NodePressure ...
	I1018 14:16:39.972976   94518 start.go:241] waiting for startup goroutines ...
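
[editor's note] The `system_svc` lines above rely on the fact that `systemctl is-active --quiet <unit>` prints nothing and reports the unit's state purely through its exit code, so the ssh_runner call only needs to check whether the command exited 0. A minimal local sketch of that check (run directly, rather than over minikube's SSH session as in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Exit status 0 means the unit is active; any non-zero status (or a
    	// missing systemctl binary) surfaces here as a non-nil error.
    	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }
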
	I1018 14:16:40.002066   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:40.262894   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:40.276088   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:40.300675   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:40.501979   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:40.762663   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:40.775357   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:40.801162   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:41.001217   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:41.263258   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:41.276712   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:41.302030   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:41.501566   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:41.763346   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:41.776428   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:41.864042   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:42.002413   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:42.261523   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:42.275424   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:42.301128   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:42.501233   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:42.762642   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:42.775674   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:42.801398   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:43.002340   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:43.262615   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:43.275813   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:43.301739   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:43.501955   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:43.762643   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:43.775323   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:43.801232   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:44.000775   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:44.262060   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:44.276189   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:44.300886   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:44.502251   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:44.764473   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:44.778601   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:44.801574   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:45.002597   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:45.262417   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:45.276012   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:45.300998   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:45.502358   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:45.762909   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:45.776217   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:45.801374   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:46.001819   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:46.262735   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:46.276581   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:46.301959   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:46.502478   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:46.762137   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:46.776205   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:46.800977   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:47.002011   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:47.263363   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:47.275692   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:47.301849   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:47.502303   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:47.762097   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:47.776163   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:47.801288   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:48.001490   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:48.261703   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:48.276059   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:48.301046   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:48.501699   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:48.762014   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:48.776050   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:48.801136   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:49.003122   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:49.262958   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:49.276638   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:49.301711   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:49.504298   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:49.762891   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:49.776580   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:49.801807   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:50.002042   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:50.262618   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:50.275672   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:50.301314   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:50.501039   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:50.762127   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:50.775584   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:50.801981   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:51.002167   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:51.263088   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:51.276354   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:51.301136   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:51.480427   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:16:51.502052   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:51.762057   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:51.775898   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:51.801122   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:52.000897   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 14:16:52.028927   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:16:52.028967   94518 retry.go:31] will retry after 23.730297472s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
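
[editor's note] The `apiVersion not set, kind not set` stderr above is kubectl's client-side validation: every YAML document in an applied manifest must carry top-level `apiVersion` and `kind` fields, and ig-crd.yaml evidently contains a document without them (the resources in ig-deployment.yaml all apply cleanly, so only the CRD file is malformed). A small Go sketch of the same check; the manifest string is a hypothetical stand-in for the broken document, not the real file contents.

    package main

    import (
    	"fmt"

    	"gopkg.in/yaml.v3"
    )

    // A stand-in for the offending document in ig-crd.yaml: metadata is
    // present but the required top-level apiVersion and kind are missing.
    const doc = `
    metadata:
      name: example-crd
    `

    func main() {
    	var m map[string]interface{}
    	if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
    		panic(err)
    	}
    	var missing []string
    	for _, field := range []string{"apiVersion", "kind"} {
    		if _, ok := m[field]; !ok {
    			missing = append(missing, field+" not set")
    		}
    	}
    	if len(missing) > 0 {
    		// Mirrors kubectl's "error validating data: [...]" message.
    		fmt.Printf("error validating data: %v\n", missing)
    	}
    }
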
	I1018 14:16:52.262296   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:52.276403   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:52.301168   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:52.502234   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:52.762724   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:52.776030   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:52.800809   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:53.002194   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:53.263147   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:53.276322   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:53.301440   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:53.501640   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:53.762159   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:53.780927   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:53.801573   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:54.001940   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:54.262129   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:54.275901   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:54.300784   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:54.502117   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:54.762236   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:54.863504   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:54.863546   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:55.001421   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:55.263239   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:55.276598   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:55.301592   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:55.502021   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:55.762642   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:55.775215   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:55.801168   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:56.001789   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:56.262562   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:56.276012   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:56.301105   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:56.501757   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:56.762498   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:56.842533   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:56.842884   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:57.002277   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:57.263015   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:57.275626   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:57.301290   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:57.501174   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:57.764069   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:57.777805   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:57.802024   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:58.001971   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:58.262456   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:58.276292   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:58.301340   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:58.501658   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:58.763184   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:58.776640   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:58.801759   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:59.002068   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:59.275369   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:59.276620   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:59.301023   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:16:59.501710   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:16:59.763756   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:16:59.865186   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:16:59.865222   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:00.002706   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:00.265539   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:00.279599   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:00.301880   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:00.502335   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:00.763538   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:00.775930   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:00.801897   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:01.002519   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:01.262026   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:01.276130   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:01.362572   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:01.501369   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:01.763644   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:01.779108   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:01.801020   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:02.001535   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:02.262634   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:02.276612   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:02.303963   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:02.501305   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:02.762496   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:02.776181   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:02.801068   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:03.002743   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:03.262111   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:03.276934   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:03.300828   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:03.504229   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:03.763691   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:03.776119   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:03.800631   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:04.003713   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:04.262687   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:04.276482   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:04.301743   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:04.502068   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:04.763078   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:04.776689   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:04.802101   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:05.001886   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:05.262410   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:05.276337   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:05.307319   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:05.501644   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:05.762053   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:05.776369   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:05.801797   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:06.002447   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:06.262193   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:06.275849   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:06.302174   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:06.502353   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:06.762956   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:06.776611   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:06.801155   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:07.001449   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:07.262841   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:07.276120   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:07.301192   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:07.502865   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:07.762883   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:07.776486   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:07.801984   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:08.002204   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:08.262684   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:08.275841   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:08.300609   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:08.501552   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:08.761868   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:08.777284   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:08.801575   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:09.002088   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:09.262321   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:09.275116   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:09.300794   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:09.502103   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:09.763105   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:09.775593   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:09.802027   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:10.002530   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:10.262721   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:10.363567   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:10.363604   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:10.501248   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:10.762594   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:10.775272   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:10.828298   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:11.002160   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:11.262832   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:11.275989   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:11.300855   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:11.504707   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:11.762245   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:11.776332   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:11.801408   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:12.002170   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:12.262266   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:12.276626   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:12.301680   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:12.502059   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:12.762293   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:12.776456   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:12.801320   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:13.001785   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:13.262871   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:13.276298   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:13.302882   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 14:17:13.503814   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:13.762416   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:13.844416   94518 kapi.go:107] duration metric: took 1m15.046903502s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 14:17:13.845081   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:14.002739   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:14.262420   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:14.276625   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:14.501876   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:14.763082   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:14.776373   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:15.002215   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:15.262541   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:15.275951   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:15.503027   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:15.759384   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 14:17:15.762301   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:15.776692   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:16.002515   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:16.262732   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:16.275795   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 14:17:16.451253   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:17:16.451302   94518 retry.go:31] will retry after 39.128992898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 14:17:16.501604   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:16.763396   94518 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 14:17:16.775487   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:17.004186   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:17.262984   94518 kapi.go:107] duration metric: took 1m18.00445624s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 14:17:17.276176   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:17.501480   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:17.776270   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:18.002634   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:18.276586   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:18.501658   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:18.776313   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:19.001775   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:19.276193   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:19.502728   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:19.776495   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:20.000907   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:20.276522   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:20.501176   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:20.775718   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:21.002256   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:21.276110   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:21.502718   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 14:17:21.776475   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:22.001349   94518 kapi.go:107] duration metric: took 1m16.00324245s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 14:17:22.003029   94518 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-493618 cluster.
	I1018 14:17:22.004220   94518 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 14:17:22.005269   94518 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 14:17:22.276180   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:22.776487   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:23.276181   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:23.777075   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:24.308479   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:24.777192   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:25.275835   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:25.777029   94518 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 14:17:26.276794   94518 kapi.go:107] duration metric: took 1m26.504622464s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 14:17:55.584930   94518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 14:17:56.123857   94518 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 14:17:56.124019   94518 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
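	The validation failure above means at least one YAML document in ig-crd.yaml lacks its two mandatory top-level fields. As a sketch of what kubectl's schema validation demands of every manifest (the ConfigMap here is a hypothetical stand-in, not the actual ig-crd.yaml content):
	
	# Every document applied to the cluster must declare these two fields;
	# their absence is exactly what "apiVersion not set, kind not set" reports.
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: example            # hypothetical name
	  namespace: gadget
	data:
	  key: value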
	I1018 14:17:56.126955   94518 out.go:179] * Enabled addons: storage-provisioner, cloud-spanner, registry-creds, ingress-dns, metrics-server, nvidia-device-plugin, amd-gpu-device-plugin, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1018 14:17:56.127992   94518 addons.go:514] duration metric: took 1m58.460970758s for enable addons: enabled=[storage-provisioner cloud-spanner registry-creds ingress-dns metrics-server nvidia-device-plugin amd-gpu-device-plugin yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1018 14:17:56.128052   94518 start.go:246] waiting for cluster config update ...
	I1018 14:17:56.128083   94518 start.go:255] writing updated cluster config ...
	I1018 14:17:56.128406   94518 ssh_runner.go:195] Run: rm -f paused
	I1018 14:17:56.132411   94518 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 14:17:56.136263   94518 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zsv4k" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.140509   94518 pod_ready.go:94] pod "coredns-66bc5c9577-zsv4k" is "Ready"
	I1018 14:17:56.140532   94518 pod_ready.go:86] duration metric: took 4.248281ms for pod "coredns-66bc5c9577-zsv4k" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.142491   94518 pod_ready.go:83] waiting for pod "etcd-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.146289   94518 pod_ready.go:94] pod "etcd-addons-493618" is "Ready"
	I1018 14:17:56.146311   94518 pod_ready.go:86] duration metric: took 3.8003ms for pod "etcd-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.148001   94518 pod_ready.go:83] waiting for pod "kube-apiserver-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.151493   94518 pod_ready.go:94] pod "kube-apiserver-addons-493618" is "Ready"
	I1018 14:17:56.151516   94518 pod_ready.go:86] duration metric: took 3.485308ms for pod "kube-apiserver-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.153295   94518 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.536543   94518 pod_ready.go:94] pod "kube-controller-manager-addons-493618" is "Ready"
	I1018 14:17:56.536571   94518 pod_ready.go:86] duration metric: took 383.254622ms for pod "kube-controller-manager-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:56.736793   94518 pod_ready.go:83] waiting for pod "kube-proxy-5x2v2" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:57.136427   94518 pod_ready.go:94] pod "kube-proxy-5x2v2" is "Ready"
	I1018 14:17:57.136456   94518 pod_ready.go:86] duration metric: took 399.638474ms for pod "kube-proxy-5x2v2" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:57.336271   94518 pod_ready.go:83] waiting for pod "kube-scheduler-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:57.736585   94518 pod_ready.go:94] pod "kube-scheduler-addons-493618" is "Ready"
	I1018 14:17:57.736613   94518 pod_ready.go:86] duration metric: took 400.31858ms for pod "kube-scheduler-addons-493618" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 14:17:57.736623   94518 pod_ready.go:40] duration metric: took 1.604180528s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 14:17:57.782211   94518 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 14:17:57.783876   94518 out.go:179] * Done! kubectl is now configured to use "addons-493618" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 14:17:51 addons-493618 crio[781]: time="2025-10-18T14:17:51.543709334Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 14:17:51 addons-493618 crio[781]: time="2025-10-18T14:17:51.543758834Z" level=info msg="Removed pod sandbox: 06966d20ba6dc8d35d11e1eae30514420929523600dd243203e21f60a54750e2" id=5ca51e9c-4c72-4600-996f-af4fe184f153 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 14:17:58 addons-493618 crio[781]: time="2025-10-18T14:17:58.601802572Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f4f53fe2-7646-43de-a905-ceb5a84b674c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 14:17:58 addons-493618 crio[781]: time="2025-10-18T14:17:58.601953326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 14:17:58 addons-493618 crio[781]: time="2025-10-18T14:17:58.608320253Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:097945ff6ffefec42ce8d6c0dc1764f6a2ef212c9d8939e1d25f883b3b8f75fc UID:7b11849d-f2f9-4652-b676-2eed786a2a6c NetNS:/var/run/netns/ba7145d7-196f-4f1e-9aa7-b34e86444e3f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000988bf0}] Aliases:map[]}"
	Oct 18 14:17:58 addons-493618 crio[781]: time="2025-10-18T14:17:58.608360832Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 14:17:58 addons-493618 crio[781]: time="2025-10-18T14:17:58.620148856Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:097945ff6ffefec42ce8d6c0dc1764f6a2ef212c9d8939e1d25f883b3b8f75fc UID:7b11849d-f2f9-4652-b676-2eed786a2a6c NetNS:/var/run/netns/ba7145d7-196f-4f1e-9aa7-b34e86444e3f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000988bf0}] Aliases:map[]}"
	Oct 18 14:17:58 addons-493618 crio[781]: time="2025-10-18T14:17:58.620293439Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 14:17:58 addons-493618 crio[781]: time="2025-10-18T14:17:58.621353451Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 14:17:58 addons-493618 crio[781]: time="2025-10-18T14:17:58.622181693Z" level=info msg="Ran pod sandbox 097945ff6ffefec42ce8d6c0dc1764f6a2ef212c9d8939e1d25f883b3b8f75fc with infra container: default/busybox/POD" id=f4f53fe2-7646-43de-a905-ceb5a84b674c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 14:17:58 addons-493618 crio[781]: time="2025-10-18T14:17:58.623510703Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=173f792c-8f32-4821-a10b-81f25cb3f768 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:17:58 addons-493618 crio[781]: time="2025-10-18T14:17:58.623645886Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=173f792c-8f32-4821-a10b-81f25cb3f768 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:17:58 addons-493618 crio[781]: time="2025-10-18T14:17:58.623680604Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=173f792c-8f32-4821-a10b-81f25cb3f768 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:17:58 addons-493618 crio[781]: time="2025-10-18T14:17:58.624357895Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=58c35e4b-5920-4711-a23f-b3cf49c6f6ce name=/runtime.v1.ImageService/PullImage
	Oct 18 14:17:58 addons-493618 crio[781]: time="2025-10-18T14:17:58.625852222Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 14:18:00 addons-493618 crio[781]: time="2025-10-18T14:18:00.822576839Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=58c35e4b-5920-4711-a23f-b3cf49c6f6ce name=/runtime.v1.ImageService/PullImage
	Oct 18 14:18:00 addons-493618 crio[781]: time="2025-10-18T14:18:00.823294728Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5c9276ef-5e6d-47bd-9210-0a1d10f82612 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:18:00 addons-493618 crio[781]: time="2025-10-18T14:18:00.824687626Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d36dbda1-13d7-4d3d-a29b-ac5c36550d82 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:18:00 addons-493618 crio[781]: time="2025-10-18T14:18:00.828378189Z" level=info msg="Creating container: default/busybox/busybox" id=6dbb903e-4565-4053-afbf-23c7010eb31c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 14:18:00 addons-493618 crio[781]: time="2025-10-18T14:18:00.828947882Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 14:18:00 addons-493618 crio[781]: time="2025-10-18T14:18:00.834090434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 14:18:00 addons-493618 crio[781]: time="2025-10-18T14:18:00.83451955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 14:18:00 addons-493618 crio[781]: time="2025-10-18T14:18:00.878398075Z" level=info msg="Created container f0ed3f5d6ffa860f92501238dc1eb53e047c8685f30df6c0ac74d024d5acd313: default/busybox/busybox" id=6dbb903e-4565-4053-afbf-23c7010eb31c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 14:18:00 addons-493618 crio[781]: time="2025-10-18T14:18:00.8790508Z" level=info msg="Starting container: f0ed3f5d6ffa860f92501238dc1eb53e047c8685f30df6c0ac74d024d5acd313" id=77d59013-eb61-428a-9dc6-2034e24af076 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 14:18:00 addons-493618 crio[781]: time="2025-10-18T14:18:00.881596759Z" level=info msg="Started container" PID=6548 containerID=f0ed3f5d6ffa860f92501238dc1eb53e047c8685f30df6c0ac74d024d5acd313 description=default/busybox/busybox id=77d59013-eb61-428a-9dc6-2034e24af076 name=/runtime.v1.RuntimeService/StartContainer sandboxID=097945ff6ffefec42ce8d6c0dc1764f6a2ef212c9d8939e1d25f883b3b8f75fc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	f0ed3f5d6ffa8       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   097945ff6ffef       busybox                                     default
	fcb7161ee1d1b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          42 seconds ago       Running             csi-snapshotter                          0                   d3d06bcc5d099       csi-hostpathplugin-t8ksl                    kube-system
	8f357a51c6b5d       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          43 seconds ago       Running             csi-provisioner                          0                   d3d06bcc5d099       csi-hostpathplugin-t8ksl                    kube-system
	530e145d6c2e0       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            45 seconds ago       Running             liveness-probe                           0                   d3d06bcc5d099       csi-hostpathplugin-t8ksl                    kube-system
	84cd4c11831db       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           45 seconds ago       Running             hostpath                                 0                   d3d06bcc5d099       csi-hostpathplugin-t8ksl                    kube-system
	edfb43ced2e1e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 46 seconds ago       Running             gcp-auth                                 0                   0c4aa9fe754c5       gcp-auth-78565c9fb4-mwgsp                   gcp-auth
	fcf3c24788988       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            48 seconds ago       Running             gadget                                   0                   0c73b5d5a20a9       gadget-vm8lx                                gadget
	10ae25ecd1d90       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                51 seconds ago       Running             node-driver-registrar                    0                   d3d06bcc5d099       csi-hostpathplugin-t8ksl                    kube-system
	45501fab46f05       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             51 seconds ago       Running             controller                               0                   3e90b0db82f21       ingress-nginx-controller-675c5ddd98-sndwh   ingress-nginx
	50a19f5b596d4       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              55 seconds ago       Running             registry-proxy                           0                   5ce9bbd315430       registry-proxy-dddz6                        kube-system
	859d5d72eef12       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     57 seconds ago       Running             amd-gpu-device-plugin                    0                   ce015c134568b       amd-gpu-device-plugin-ps8fn                 kube-system
	78aea4ac76ed2       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     59 seconds ago       Running             nvidia-device-plugin-ctr                 0                   d601227de066c       nvidia-device-plugin-daemonset-w9ks6        kube-system
	775733aea8bf0       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   About a minute ago   Running             csi-external-health-monitor-controller   0                   d3d06bcc5d099       csi-hostpathplugin-t8ksl                    kube-system
	32ea63c74de31       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago   Running             yakd                                     0                   06ef25b517353       yakd-dashboard-5ff678cb9-cqgkj              yakd-dashboard
	6673efa077656       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   dec41ec76cd03       csi-hostpath-resizer-0                      kube-system
	89679d50a3910       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   0048d743f42d1       csi-hostpath-attacher-0                     kube-system
	c52d44cde4f71       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   b534c52d0c84c       snapshot-controller-7d9fbc56b8-fcm6w        kube-system
	6883ad86fcecd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              patch                                    0                   a08859b82414b       ingress-nginx-admission-patch-vxb5f         ingress-nginx
	a9e1fbf487f51       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               About a minute ago   Running             cloud-spanner-emulator                   0                   69532574c7971       cloud-spanner-emulator-86bd5cbb97-2nxxs     default
	8e896cc7ee32d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              create                                   0                   f011cb8ba518a       ingress-nginx-admission-create-tnv6j        ingress-nginx
	92ceaca691f51       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   2ee75d4e4001f       snapshot-controller-7d9fbc56b8-8ftdc        kube-system
	da0ddb2d0550b       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   c8aaf317eece5       kube-ingress-dns-minikube                   kube-system
	79474cdc2efcd       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   89516e7730f54       local-path-provisioner-648f6765c9-xgggg     local-path-storage
	a51f3eea29502       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   97f317fc1b5dc       registry-6b586f9694-pdjc2                   kube-system
	ca1869e801d6e       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   8f3ce70811032       metrics-server-85b7d694d7-hzzlq             kube-system
	7fc1c430e912b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   4107d196d2062       coredns-66bc5c9577-zsv4k                    kube-system
	d41651660ae84       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   ffc42416a6b3e       storage-provisioner                         kube-system
	778f4f35207fc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             2 minutes ago        Running             kindnet-cni                              0                   5b6cacbfc954b       kindnet-vhk9j                               kube-system
	fc19fe3563e01       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             2 minutes ago        Running             kube-proxy                               0                   ff4d1c0bbd1d6       kube-proxy-5x2v2                            kube-system
	f616a2d4df678       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   9bbc44f90a4b5       kube-apiserver-addons-493618                kube-system
	411a5716e9150       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   56968af9a8607       etcd-addons-493618                          kube-system
	857014c2e77ee       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   3e0b656b74b60       kube-scheduler-addons-493618                kube-system
	aa8c1cbd9ac9c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   a4c04910854cf       kube-controller-manager-addons-493618       kube-system
	
	
	==> coredns [7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca] <==
	[INFO] 10.244.0.18:34110 - 46291 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003216543s
	[INFO] 10.244.0.18:36936 - 61726 "A IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000087774s
	[INFO] 10.244.0.18:36936 - 61981 "AAAA IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000144707s
	[INFO] 10.244.0.18:45785 - 64672 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000050463s
	[INFO] 10.244.0.18:45785 - 64364 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000068892s
	[INFO] 10.244.0.18:56105 - 58209 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000100102s
	[INFO] 10.244.0.18:56105 - 57967 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000119544s
	[INFO] 10.244.0.18:50533 - 33100 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000132941s
	[INFO] 10.244.0.18:50533 - 33283 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000221825s
	[INFO] 10.244.0.22:33521 - 55538 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000215098s
	[INFO] 10.244.0.22:47080 - 33256 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000317199s
	[INFO] 10.244.0.22:56484 - 40851 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000134629s
	[INFO] 10.244.0.22:44035 - 10720 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000128571s
	[INFO] 10.244.0.22:40058 - 43405 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000131091s
	[INFO] 10.244.0.22:52140 - 12999 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000152446s
	[INFO] 10.244.0.22:44974 - 45426 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003453494s
	[INFO] 10.244.0.22:51430 - 52157 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003530813s
	[INFO] 10.244.0.22:37490 - 40808 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004300165s
	[INFO] 10.244.0.22:57612 - 51632 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004894861s
	[INFO] 10.244.0.22:33801 - 2038 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004753797s
	[INFO] 10.244.0.22:51344 - 53286 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006885147s
	[INFO] 10.244.0.22:52656 - 1987 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005330183s
	[INFO] 10.244.0.22:38256 - 15835 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006395765s
	[INFO] 10.244.0.22:55111 - 46405 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000941313s
	[INFO] 10.244.0.22:46598 - 43189 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001357914s
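	The NXDOMAIN ladder above is ordinary resolver search-path expansion: with ndots at the Kubernetes default of 5, an unqualified name such as storage.googleapis.com is tried against every suffix in the pod's search list before the bare name finally resolves with NOERROR. A sketch of the pod-side knob that governs this behavior (dnsConfig.options is the real Pod API field; the pod itself is hypothetical):
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: dns-example          # hypothetical pod
	spec:
	  containers:
	    - name: app
	      image: busybox:1.36    # placeholder image
	      command: ["sleep", "3600"]
	  dnsConfig:
	    options:
	      - name: ndots
	        value: "1"           # lowering ndots skips the search-suffix ladder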
	
	
	==> describe nodes <==
	Name:               addons-493618
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-493618
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=addons-493618
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T14_15_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-493618
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-493618"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 14:15:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-493618
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 14:18:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 14:17:54 +0000   Sat, 18 Oct 2025 14:15:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 14:17:54 +0000   Sat, 18 Oct 2025 14:15:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 14:17:54 +0000   Sat, 18 Oct 2025 14:15:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 14:17:54 +0000   Sat, 18 Oct 2025 14:16:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-493618
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                c99ec94e-dad8-466b-986d-f557d98b8e1c
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-86bd5cbb97-2nxxs      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  gadget                      gadget-vm8lx                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  gcp-auth                    gcp-auth-78565c9fb4-mwgsp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-sndwh    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         2m9s
	  kube-system                 amd-gpu-device-plugin-ps8fn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-zsv4k                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m11s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 csi-hostpathplugin-t8ksl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 etcd-addons-493618                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m17s
	  kube-system                 kindnet-vhk9j                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m11s
	  kube-system                 kube-apiserver-addons-493618                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-controller-manager-addons-493618        200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-proxy-5x2v2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-scheduler-addons-493618                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 metrics-server-85b7d694d7-hzzlq              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         2m10s
	  kube-system                 nvidia-device-plugin-daemonset-w9ks6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 registry-6b586f9694-pdjc2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 registry-creds-764b6fb674-czp24              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 registry-proxy-dddz6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 snapshot-controller-7d9fbc56b8-8ftdc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 snapshot-controller-7d9fbc56b8-fcm6w         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  local-path-storage          local-path-provisioner-648f6765c9-xgggg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-cqgkj               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     2m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m10s  kube-proxy       
	  Normal  Starting                 2m17s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m17s  kubelet          Node addons-493618 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m17s  kubelet          Node addons-493618 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m17s  kubelet          Node addons-493618 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m12s  node-controller  Node addons-493618 event: Registered Node addons-493618 in Controller
	  Normal  NodeReady                90s    kubelet          Node addons-493618 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 12:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001882] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395784] i8042: Warning: Keylock active
	[  +0.014256] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004439] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001035] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000894] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.001002] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000868] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001019] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.001050] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000769] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001154] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.509528] block sda: the capability attribute has been deprecated.
	[  +0.096767] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026410] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.055938] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5] <==
	{"level":"warn","ts":"2025-10-18T14:15:48.524192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.530308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.536786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.546053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.559657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.566802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.575632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.584037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.591784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.605020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.612481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.619606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.634187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.637964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.644321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.650704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:15:48.695116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:16:00.196257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:16:00.202493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:16:26.281250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:16:26.287738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:16:26.308478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:16:26.315202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39426","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T14:16:59.273487Z","caller":"traceutil/trace.go:172","msg":"trace[603722411] transaction","detail":"{read_only:false; response_revision:1051; number_of_response:1; }","duration":"100.664551ms","start":"2025-10-18T14:16:59.172784Z","end":"2025-10-18T14:16:59.273449Z","steps":["trace[603722411] 'process raft request'  (duration: 100.381339ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T14:17:24.306442Z","caller":"traceutil/trace.go:172","msg":"trace[1562610933] transaction","detail":"{read_only:false; response_revision:1203; number_of_response:1; }","duration":"100.640382ms","start":"2025-10-18T14:17:24.205781Z","end":"2025-10-18T14:17:24.306422Z","steps":["trace[1562610933] 'process raft request'  (duration: 64.205106ms)","trace[1562610933] 'compare'  (duration: 36.281867ms)"],"step_count":2}
	
	
	==> gcp-auth [edfb43ced2e1e4c4fbb178805c38e20bf5073a4864e99ecf580aa951e010b54f] <==
	2025/10/18 14:17:21 GCP Auth Webhook started!
	2025/10/18 14:17:58 Ready to marshal response ...
	2025/10/18 14:17:58 Ready to write response ...
	2025/10/18 14:17:58 Ready to marshal response ...
	2025/10/18 14:17:58 Ready to write response ...
	2025/10/18 14:17:58 Ready to marshal response ...
	2025/10/18 14:17:58 Ready to write response ...
	
	
	==> kernel <==
	 14:18:08 up  2:00,  0 user,  load average: 1.07, 2.54, 2.85
	Linux addons-493618 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750] <==
	E1018 14:16:28.154835       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 14:16:28.157218       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 14:16:29.756015       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 14:16:29.756051       1 metrics.go:72] Registering metrics
	I1018 14:16:29.756106       1 controller.go:711] "Syncing nftables rules"
	I1018 14:16:38.060488       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:16:38.060548       1 main.go:301] handling current node
	I1018 14:16:48.061227       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:16:48.061271       1 main.go:301] handling current node
	I1018 14:16:58.060990       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:16:58.061028       1 main.go:301] handling current node
	I1018 14:17:08.061209       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:17:08.061249       1 main.go:301] handling current node
	I1018 14:17:18.061529       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:17:18.061597       1 main.go:301] handling current node
	I1018 14:17:28.060998       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:17:28.061044       1 main.go:301] handling current node
	I1018 14:17:38.060498       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:17:38.060552       1 main.go:301] handling current node
	I1018 14:17:48.061258       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:17:48.061286       1 main.go:301] handling current node
	I1018 14:17:58.061164       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:17:58.061202       1 main.go:301] handling current node
	I1018 14:18:08.061049       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:18:08.061086       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4] <==
	I1018 14:16:05.936086       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.108.149.8"}
	W1018 14:16:26.281182       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 14:16:26.287669       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 14:16:26.308382       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 14:16:26.315041       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 14:16:38.576682       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.149.8:443: connect: connection refused
	E1018 14:16:38.576731       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.149.8:443: connect: connection refused" logger="UnhandledError"
	W1018 14:16:38.576868       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.149.8:443: connect: connection refused
	E1018 14:16:38.576902       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.149.8:443: connect: connection refused" logger="UnhandledError"
	W1018 14:16:38.600334       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.149.8:443: connect: connection refused
	E1018 14:16:38.600374       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.149.8:443: connect: connection refused" logger="UnhandledError"
	W1018 14:16:38.600902       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.149.8:443: connect: connection refused
	E1018 14:16:38.600965       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.149.8:443: connect: connection refused" logger="UnhandledError"
	E1018 14:16:41.703457       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.161.37:443: connect: connection refused" logger="UnhandledError"
	W1018 14:16:41.703665       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 14:16:41.703731       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1018 14:16:41.704079       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.161.37:443: connect: connection refused" logger="UnhandledError"
	E1018 14:16:41.709516       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.161.37:443: connect: connection refused" logger="UnhandledError"
	E1018 14:16:41.731124       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.161.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.161.37:443: connect: connection refused" logger="UnhandledError"
	I1018 14:16:41.803282       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 14:18:06.446462       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36446: use of closed network connection
	E1018 14:18:06.603755       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36468: use of closed network connection
	
	
	==> kube-controller-manager [aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8] <==
	I1018 14:15:56.264599       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 14:15:56.264698       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 14:15:56.265922       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 14:15:56.268232       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 14:15:56.268288       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 14:15:56.268335       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 14:15:56.268348       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 14:15:56.268355       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 14:15:56.268387       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 14:15:56.269609       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:15:56.269629       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 14:15:56.269638       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 14:15:56.269971       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 14:15:56.275422       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-493618" podCIDRs=["10.244.0.0/24"]
	I1018 14:15:56.277385       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 14:15:56.289378       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 14:15:58.850088       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1018 14:16:26.274934       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 14:16:26.275118       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 14:16:26.275191       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 14:16:26.299136       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 14:16:26.302741       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 14:16:26.376108       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 14:16:26.403598       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:16:41.219427       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
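The "stale GroupVersion discovery: metrics.k8s.io/v1beta1" errors from the resource-quota and garbage-collector controllers are the controller-manager-side view of the same metrics-server startup gap; both controllers redo discovery, and the "Caches are synced" lines at 14:16:26 confirm they recovered. The discovery data they depend on can be checked with (example command, assuming a reachable cluster):

	kubectl api-resources --api-group=metrics.k8s.io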
	
	
	==> kube-proxy [fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa] <==
	I1018 14:15:57.532244       1 server_linux.go:53] "Using iptables proxy"
	I1018 14:15:57.592753       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 14:15:57.697045       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:15:57.697101       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 14:15:57.697216       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:15:57.841695       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 14:15:57.841901       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:15:57.911876       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:15:57.922658       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:15:57.939484       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:15:57.952373       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:15:57.952400       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:15:57.952456       1 config.go:200] "Starting service config controller"
	I1018 14:15:57.952467       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:15:57.952500       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:15:57.952508       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:15:57.954225       1 config.go:309] "Starting node config controller"
	I1018 14:15:57.954269       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:15:57.954278       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:15:58.053620       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 14:15:58.053669       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:15:58.053697       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
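kube-proxy starts cleanly here; its only complaint is that nodePortAddresses is unset, so NodePort traffic is accepted on every local IP. The warning suggests the remedy itself; in a kubeadm-style cluster like this one that setting lives in the kube-proxy ConfigMap (a sketch, assuming the default ConfigMap layout):

	kubectl -n kube-system get configmap kube-proxy -o yaml
	# under config.conf, set:
	#   nodePortAddresses: ["primary"]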
	
	
	==> kube-scheduler [857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1] <==
	E1018 14:15:49.134247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:15:49.134258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:15:49.134330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 14:15:49.134307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:15:49.134338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 14:15:49.134328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 14:15:49.134351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:15:49.134453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:15:49.134460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 14:15:49.946543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 14:15:49.998890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:15:50.032174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:15:50.063609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:15:50.072057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 14:15:50.134634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 14:15:50.154988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:15:50.166165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 14:15:50.179329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 14:15:50.235814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 14:15:50.269111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:15:50.270159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:15:50.295510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 14:15:50.353863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 14:15:50.392021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1018 14:15:52.930460       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
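The burst of "Failed to watch ... is forbidden" errors is confined to the first seconds after scheduler startup, while the apiserver is still bootstrapping RBAC; the informers retry, and the final "Caches are synced" line at 14:15:52 shows they succeeded. Scheduler permissions can be spot-checked after the fact with (illustrative command, not run by the test):

	kubectl auth can-i list pods --as=system:kube-scheduler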
	
	
	==> kubelet <==
	Oct 18 14:17:09 addons-493618 kubelet[1280]: I1018 14:17:09.826280    1280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-w9ks6" podStartSLOduration=2.077740894 podStartE2EDuration="31.826256111s" podCreationTimestamp="2025-10-18 14:16:38 +0000 UTC" firstStartedPulling="2025-10-18 14:16:39.033437222 +0000 UTC m=+47.591690209" lastFinishedPulling="2025-10-18 14:17:08.78195244 +0000 UTC m=+77.340205426" observedRunningTime="2025-10-18 14:17:09.825897558 +0000 UTC m=+78.384150555" watchObservedRunningTime="2025-10-18 14:17:09.826256111 +0000 UTC m=+78.384509106"
	Oct 18 14:17:10 addons-493618 kubelet[1280]: E1018 14:17:10.576787    1280 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 18 14:17:10 addons-493618 kubelet[1280]: E1018 14:17:10.576875    1280 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3c3218a-127e-4d0d-90f6-a2b735fc7c5c-gcr-creds podName:a3c3218a-127e-4d0d-90f6-a2b735fc7c5c nodeName:}" failed. No retries permitted until 2025-10-18 14:17:42.576860527 +0000 UTC m=+111.135113513 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/a3c3218a-127e-4d0d-90f6-a2b735fc7c5c-gcr-creds") pod "registry-creds-764b6fb674-czp24" (UID: "a3c3218a-127e-4d0d-90f6-a2b735fc7c5c") : secret "registry-creds-gcr" not found
	Oct 18 14:17:10 addons-493618 kubelet[1280]: I1018 14:17:10.817570    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ps8fn" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:17:10 addons-493618 kubelet[1280]: I1018 14:17:10.818953    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-w9ks6" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:17:10 addons-493618 kubelet[1280]: I1018 14:17:10.828993    1280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-ps8fn" podStartSLOduration=1.639228758 podStartE2EDuration="32.828971892s" podCreationTimestamp="2025-10-18 14:16:38 +0000 UTC" firstStartedPulling="2025-10-18 14:16:39.037417054 +0000 UTC m=+47.595670042" lastFinishedPulling="2025-10-18 14:17:10.227160202 +0000 UTC m=+78.785413176" observedRunningTime="2025-10-18 14:17:10.828083888 +0000 UTC m=+79.386336883" watchObservedRunningTime="2025-10-18 14:17:10.828971892 +0000 UTC m=+79.387224904"
	Oct 18 14:17:11 addons-493618 kubelet[1280]: I1018 14:17:11.822726    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ps8fn" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:17:13 addons-493618 kubelet[1280]: I1018 14:17:13.830857    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-dddz6" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:17:13 addons-493618 kubelet[1280]: I1018 14:17:13.844390    1280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-dddz6" podStartSLOduration=2.029068369 podStartE2EDuration="35.844368897s" podCreationTimestamp="2025-10-18 14:16:38 +0000 UTC" firstStartedPulling="2025-10-18 14:16:39.052410311 +0000 UTC m=+47.610663297" lastFinishedPulling="2025-10-18 14:17:12.867710847 +0000 UTC m=+81.425963825" observedRunningTime="2025-10-18 14:17:13.843504053 +0000 UTC m=+82.401757032" watchObservedRunningTime="2025-10-18 14:17:13.844368897 +0000 UTC m=+82.402621891"
	Oct 18 14:17:14 addons-493618 kubelet[1280]: I1018 14:17:14.834202    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-dddz6" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 14:17:16 addons-493618 kubelet[1280]: I1018 14:17:16.856761    1280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-sndwh" podStartSLOduration=56.032418463 podStartE2EDuration="1m17.856738181s" podCreationTimestamp="2025-10-18 14:15:59 +0000 UTC" firstStartedPulling="2025-10-18 14:16:54.553078595 +0000 UTC m=+63.111331582" lastFinishedPulling="2025-10-18 14:17:16.377398272 +0000 UTC m=+84.935651300" observedRunningTime="2025-10-18 14:17:16.85601675 +0000 UTC m=+85.414269744" watchObservedRunningTime="2025-10-18 14:17:16.856738181 +0000 UTC m=+85.414991178"
	Oct 18 14:17:19 addons-493618 kubelet[1280]: I1018 14:17:19.870425    1280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-vm8lx" podStartSLOduration=68.484026237 podStartE2EDuration="1m20.870403762s" podCreationTimestamp="2025-10-18 14:15:59 +0000 UTC" firstStartedPulling="2025-10-18 14:17:07.251789787 +0000 UTC m=+75.810042765" lastFinishedPulling="2025-10-18 14:17:19.638167311 +0000 UTC m=+88.196420290" observedRunningTime="2025-10-18 14:17:19.870127073 +0000 UTC m=+88.428380080" watchObservedRunningTime="2025-10-18 14:17:19.870403762 +0000 UTC m=+88.428656763"
	Oct 18 14:17:21 addons-493618 kubelet[1280]: I1018 14:17:21.881854    1280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-mwgsp" podStartSLOduration=65.826033769 podStartE2EDuration="1m16.881828314s" podCreationTimestamp="2025-10-18 14:16:05 +0000 UTC" firstStartedPulling="2025-10-18 14:17:10.74890637 +0000 UTC m=+79.307159363" lastFinishedPulling="2025-10-18 14:17:21.804700934 +0000 UTC m=+90.362953908" observedRunningTime="2025-10-18 14:17:21.880369024 +0000 UTC m=+90.438622000" watchObservedRunningTime="2025-10-18 14:17:21.881828314 +0000 UTC m=+90.440081309"
	Oct 18 14:17:23 addons-493618 kubelet[1280]: I1018 14:17:23.590642    1280 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 18 14:17:23 addons-493618 kubelet[1280]: I1018 14:17:23.590686    1280 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 18 14:17:25 addons-493618 kubelet[1280]: I1018 14:17:25.911507    1280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-t8ksl" podStartSLOduration=1.805099019 podStartE2EDuration="47.911484381s" podCreationTimestamp="2025-10-18 14:16:38 +0000 UTC" firstStartedPulling="2025-10-18 14:16:39.031672536 +0000 UTC m=+47.589925525" lastFinishedPulling="2025-10-18 14:17:25.138057899 +0000 UTC m=+93.696310887" observedRunningTime="2025-10-18 14:17:25.909994322 +0000 UTC m=+94.468247318" watchObservedRunningTime="2025-10-18 14:17:25.911484381 +0000 UTC m=+94.469737376"
	Oct 18 14:17:33 addons-493618 kubelet[1280]: I1018 14:17:33.531305    1280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3633a59f-0067-4a29-9786-9336b0f21aaf" path="/var/lib/kubelet/pods/3633a59f-0067-4a29-9786-9336b0f21aaf/volumes"
	Oct 18 14:17:33 addons-493618 kubelet[1280]: I1018 14:17:33.531656    1280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1bb97ea-edcb-4fc9-8777-ed1419329d08" path="/var/lib/kubelet/pods/c1bb97ea-edcb-4fc9-8777-ed1419329d08/volumes"
	Oct 18 14:17:42 addons-493618 kubelet[1280]: E1018 14:17:42.631373    1280 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 18 14:17:42 addons-493618 kubelet[1280]: E1018 14:17:42.631488    1280 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3c3218a-127e-4d0d-90f6-a2b735fc7c5c-gcr-creds podName:a3c3218a-127e-4d0d-90f6-a2b735fc7c5c nodeName:}" failed. No retries permitted until 2025-10-18 14:18:46.631472375 +0000 UTC m=+175.189725360 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/a3c3218a-127e-4d0d-90f6-a2b735fc7c5c-gcr-creds") pod "registry-creds-764b6fb674-czp24" (UID: "a3c3218a-127e-4d0d-90f6-a2b735fc7c5c") : secret "registry-creds-gcr" not found
	Oct 18 14:17:51 addons-493618 kubelet[1280]: I1018 14:17:51.518966    1280 scope.go:117] "RemoveContainer" containerID="5cc319c39b5d2942db92fd0715f3b89bd1f98d8ab4b5c033b30d022f1e5c1cb8"
	Oct 18 14:17:51 addons-493618 kubelet[1280]: I1018 14:17:51.526714    1280 scope.go:117] "RemoveContainer" containerID="fdaf53c759693df6e70108448a7481e2608118909ea8eba27a3faea9e8b11489"
	Oct 18 14:17:58 addons-493618 kubelet[1280]: I1018 14:17:58.451318    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7b11849d-f2f9-4652-b676-2eed786a2a6c-gcp-creds\") pod \"busybox\" (UID: \"7b11849d-f2f9-4652-b676-2eed786a2a6c\") " pod="default/busybox"
	Oct 18 14:17:58 addons-493618 kubelet[1280]: I1018 14:17:58.451460    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhntn\" (UniqueName: \"kubernetes.io/projected/7b11849d-f2f9-4652-b676-2eed786a2a6c-kube-api-access-qhntn\") pod \"busybox\" (UID: \"7b11849d-f2f9-4652-b676-2eed786a2a6c\") " pod="default/busybox"
	Oct 18 14:18:01 addons-493618 kubelet[1280]: I1018 14:18:01.032048    1280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.831920115 podStartE2EDuration="3.032025969s" podCreationTimestamp="2025-10-18 14:17:58 +0000 UTC" firstStartedPulling="2025-10-18 14:17:58.623984168 +0000 UTC m=+127.182237142" lastFinishedPulling="2025-10-18 14:18:00.824090016 +0000 UTC m=+129.382342996" observedRunningTime="2025-10-18 14:18:01.031232288 +0000 UTC m=+129.589485282" watchObservedRunningTime="2025-10-18 14:18:01.032025969 +0000 UTC m=+129.590278962"
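Two details stand out in the kubelet log: the MountVolume.SetUp failures for registry-creds-764b6fb674-czp24 back off exponentially (32s, then 1m4s) because the registry-creds-gcr secret was never created, and the "gcp-auth" pull-secret messages are informational only. Whether the missing secret exists can be checked directly (example command):

	kubectl -n kube-system get secret registry-creds-gcr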
	
	
	==> storage-provisioner [d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e] <==
	W1018 14:17:43.547248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:45.550906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:45.558225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:47.561665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:47.566130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:49.569847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:49.574739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:51.578824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:51.582959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:53.586536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:53.590445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:55.593820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:55.598713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:57.602283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:57.606222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:59.609224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:17:59.614979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:18:01.618229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:18:01.623587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:18:03.626256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:18:03.631457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:18:05.635044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:18:05.639469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:18:07.642300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:18:07.647933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
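The storage-provisioner warnings repeat every two seconds, most likely because the provisioner still takes its leader-election lock on the deprecated v1 Endpoints resource; the calls succeed, so this is noise rather than a failure. The replacement resource the warning points at can be listed with (example command):

	kubectl get endpointslices -A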
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-493618 -n addons-493618
helpers_test.go:269: (dbg) Run:  kubectl --context addons-493618 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-tnv6j ingress-nginx-admission-patch-vxb5f registry-creds-764b6fb674-czp24
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-493618 describe pod ingress-nginx-admission-create-tnv6j ingress-nginx-admission-patch-vxb5f registry-creds-764b6fb674-czp24
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-493618 describe pod ingress-nginx-admission-create-tnv6j ingress-nginx-admission-patch-vxb5f registry-creds-764b6fb674-czp24: exit status 1 (60.700477ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-tnv6j" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vxb5f" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-czp24" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-493618 describe pod ingress-nginx-admission-create-tnv6j ingress-nginx-admission-patch-vxb5f registry-creds-764b6fb674-czp24: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-493618 addons disable headlamp --alsologtostderr -v=1: exit status 11 (233.275132ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 14:18:09.187603  104353 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:18:09.187759  104353 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:09.187772  104353 out.go:374] Setting ErrFile to fd 2...
	I1018 14:18:09.187778  104353 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:09.187984  104353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:18:09.188295  104353 mustload.go:65] Loading cluster: addons-493618
	I1018 14:18:09.188666  104353 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:09.188685  104353 addons.go:606] checking whether the cluster is paused
	I1018 14:18:09.188785  104353 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:09.188803  104353 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:18:09.189261  104353 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:18:09.207218  104353 ssh_runner.go:195] Run: systemctl --version
	I1018 14:18:09.207303  104353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:18:09.224479  104353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:18:09.319813  104353 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:18:09.319900  104353 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:18:09.350890  104353 cri.go:89] found id: "fcb7161ee1d1b988b7ab1002f35293f0b7091c9bac0d7bde7e8471426df926cf"
	I1018 14:18:09.350949  104353 cri.go:89] found id: "8f357a51c6b5dca76ccd31166d38783da025c5cbb4407847372de403612f8902"
	I1018 14:18:09.350955  104353 cri.go:89] found id: "530e145d6c2e072e1f708ba6909c358faf2489d1fb8aa08aa89f54faf841dbbf"
	I1018 14:18:09.350959  104353 cri.go:89] found id: "84cd4c11831db761e4325adee4879bf6a046512882f96bb1ca62c4882e6c8939"
	I1018 14:18:09.350961  104353 cri.go:89] found id: "10ae25ecd1d9000b05b1943468a2e37d7d7ad234a9cbda7cef9a4926bc9a192c"
	I1018 14:18:09.350964  104353 cri.go:89] found id: "50a19f5b596d4f200db4d0ab0cd9f04fbb94a5ef09f512aeefc81c7d4920b424"
	I1018 14:18:09.350967  104353 cri.go:89] found id: "859d5d72eef12d0f592a58b42cf0be1853b423f2c91a731d854b74ff79470e8f"
	I1018 14:18:09.350969  104353 cri.go:89] found id: "78aea4ac76ed251743d7541eaa9c63acd9c8c2af1858f2862ef1bb654b9423b3"
	I1018 14:18:09.350972  104353 cri.go:89] found id: "775733aea8bf0d456fffa861d7bbf1e97ecc24f79c468dcc0c8f760bfe0b6df0"
	I1018 14:18:09.350978  104353 cri.go:89] found id: "6673efa0776562914d4e7c35e4a4a121d60c7796edfc736b01120598fdf6dfda"
	I1018 14:18:09.350982  104353 cri.go:89] found id: "89679d50a3910712c81dd83e3ab5d310452f13a23c0dce3d8afeb4abec58b99f"
	I1018 14:18:09.350986  104353 cri.go:89] found id: "c52d44cde4f7100b3b7100db0eda92a6716001140bcb111fc1b8bad7bcffcd87"
	I1018 14:18:09.350989  104353 cri.go:89] found id: "92ceaca691f5147122ab362b3936a7201bede1ae2ac6288d04e5d0641f150e2f"
	I1018 14:18:09.350993  104353 cri.go:89] found id: "da0ddb2d0550b3553349e63e5796b3d59ac8c5a7d574c962118808b4750f4934"
	I1018 14:18:09.350996  104353 cri.go:89] found id: "a51f3eea295020493bcd77872d74756d33249e7b25591c8967346f770e49a885"
	I1018 14:18:09.351007  104353 cri.go:89] found id: "ca1869e801d6ed194d599a52234484412f6d9de897b6eadae30ca18325c92762"
	I1018 14:18:09.351015  104353 cri.go:89] found id: "7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca"
	I1018 14:18:09.351021  104353 cri.go:89] found id: "d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e"
	I1018 14:18:09.351025  104353 cri.go:89] found id: "778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750"
	I1018 14:18:09.351029  104353 cri.go:89] found id: "fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa"
	I1018 14:18:09.351040  104353 cri.go:89] found id: "f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4"
	I1018 14:18:09.351046  104353 cri.go:89] found id: "411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5"
	I1018 14:18:09.351050  104353 cri.go:89] found id: "857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1"
	I1018 14:18:09.351057  104353 cri.go:89] found id: "aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8"
	I1018 14:18:09.351059  104353 cri.go:89] found id: ""
	I1018 14:18:09.351112  104353 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 14:18:09.364937  104353 out.go:203] 
	W1018 14:18:09.366188  104353 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 14:18:09.366213  104353 out.go:285] * 
	* 
	W1018 14:18:09.371065  104353 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 14:18:09.372581  104353 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-493618 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.53s)
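Every addons disable failure in this report has the same shape: minikube's paused-cluster check shells out to sudo runc list -f json, /run/runc does not exist on this crio node, so the check itself exits 1 and the command aborts with MK_ADDON_DISABLE_PAUSED. The crictl listing just above succeeds, which points at an OCI-runtime-root mismatch rather than a broken runtime; CRI-O may be driving containers through a different runtime whose state lives elsewhere (for example crun under /run/crun). A diagnostic sketch for the node (the paths are assumptions, not taken from the log):

	out/minikube-linux-amd64 -p addons-493618 ssh
	sudo runc list -f json        # reproduces the failure: open /run/runc: no such file or directory
	sudo crictl ps -a             # works, matching the cri.go lines above
	sudo crio config | grep -E 'default_runtime|runtime_path'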

TestAddons/parallel/CloudSpanner (5.25s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-2nxxs" [548cea3e-84b8-45fd-bffd-b41089ed0377] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003630992s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-493618 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (238.347849ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 14:18:27.051968  106464 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:18:27.052207  106464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:27.052216  106464 out.go:374] Setting ErrFile to fd 2...
	I1018 14:18:27.052219  106464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:27.052442  106464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:18:27.052700  106464 mustload.go:65] Loading cluster: addons-493618
	I1018 14:18:27.053078  106464 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:27.053099  106464 addons.go:606] checking whether the cluster is paused
	I1018 14:18:27.053189  106464 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:27.053201  106464 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:18:27.053556  106464 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:18:27.072101  106464 ssh_runner.go:195] Run: systemctl --version
	I1018 14:18:27.072176  106464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:18:27.089859  106464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:18:27.188921  106464 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:18:27.189019  106464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:18:27.219467  106464 cri.go:89] found id: "fcb7161ee1d1b988b7ab1002f35293f0b7091c9bac0d7bde7e8471426df926cf"
	I1018 14:18:27.219498  106464 cri.go:89] found id: "8f357a51c6b5dca76ccd31166d38783da025c5cbb4407847372de403612f8902"
	I1018 14:18:27.219504  106464 cri.go:89] found id: "530e145d6c2e072e1f708ba6909c358faf2489d1fb8aa08aa89f54faf841dbbf"
	I1018 14:18:27.219508  106464 cri.go:89] found id: "84cd4c11831db761e4325adee4879bf6a046512882f96bb1ca62c4882e6c8939"
	I1018 14:18:27.219512  106464 cri.go:89] found id: "10ae25ecd1d9000b05b1943468a2e37d7d7ad234a9cbda7cef9a4926bc9a192c"
	I1018 14:18:27.219517  106464 cri.go:89] found id: "50a19f5b596d4f200db4d0ab0cd9f04fbb94a5ef09f512aeefc81c7d4920b424"
	I1018 14:18:27.219520  106464 cri.go:89] found id: "859d5d72eef12d0f592a58b42cf0be1853b423f2c91a731d854b74ff79470e8f"
	I1018 14:18:27.219522  106464 cri.go:89] found id: "78aea4ac76ed251743d7541eaa9c63acd9c8c2af1858f2862ef1bb654b9423b3"
	I1018 14:18:27.219525  106464 cri.go:89] found id: "775733aea8bf0d456fffa861d7bbf1e97ecc24f79c468dcc0c8f760bfe0b6df0"
	I1018 14:18:27.219531  106464 cri.go:89] found id: "6673efa0776562914d4e7c35e4a4a121d60c7796edfc736b01120598fdf6dfda"
	I1018 14:18:27.219533  106464 cri.go:89] found id: "89679d50a3910712c81dd83e3ab5d310452f13a23c0dce3d8afeb4abec58b99f"
	I1018 14:18:27.219536  106464 cri.go:89] found id: "c52d44cde4f7100b3b7100db0eda92a6716001140bcb111fc1b8bad7bcffcd87"
	I1018 14:18:27.219538  106464 cri.go:89] found id: "92ceaca691f5147122ab362b3936a7201bede1ae2ac6288d04e5d0641f150e2f"
	I1018 14:18:27.219540  106464 cri.go:89] found id: "da0ddb2d0550b3553349e63e5796b3d59ac8c5a7d574c962118808b4750f4934"
	I1018 14:18:27.219543  106464 cri.go:89] found id: "a51f3eea295020493bcd77872d74756d33249e7b25591c8967346f770e49a885"
	I1018 14:18:27.219547  106464 cri.go:89] found id: "ca1869e801d6ed194d599a52234484412f6d9de897b6eadae30ca18325c92762"
	I1018 14:18:27.219550  106464 cri.go:89] found id: "7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca"
	I1018 14:18:27.219554  106464 cri.go:89] found id: "d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e"
	I1018 14:18:27.219556  106464 cri.go:89] found id: "778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750"
	I1018 14:18:27.219560  106464 cri.go:89] found id: "fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa"
	I1018 14:18:27.219564  106464 cri.go:89] found id: "f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4"
	I1018 14:18:27.219568  106464 cri.go:89] found id: "411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5"
	I1018 14:18:27.219572  106464 cri.go:89] found id: "857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1"
	I1018 14:18:27.219577  106464 cri.go:89] found id: "aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8"
	I1018 14:18:27.219585  106464 cri.go:89] found id: ""
	I1018 14:18:27.219631  106464 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 14:18:27.233359  106464 out.go:203] 
	W1018 14:18:27.234621  106464 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 14:18:27.234639  106464 out.go:285] * 
	* 
	W1018 14:18:27.239585  106464 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 14:18:27.240939  106464 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-493618 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.25s)

TestAddons/parallel/LocalPath (12.09s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-493618 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-493618 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493618 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [742341bd-9fcb-4514-bffa-f6c919afcaac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [742341bd-9fcb-4514-bffa-f6c919afcaac] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [742341bd-9fcb-4514-bffa-f6c919afcaac] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.003860764s
addons_test.go:967: (dbg) Run:  kubectl --context addons-493618 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 ssh "cat /opt/local-path-provisioner/pvc-a6ac2dbf-6d84-47b0-9a9a-79b9ddfd5256_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-493618 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-493618 delete pvc test-pvc
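The LocalPath steps above are a complete round trip: create a PVC against the rancher local-path provisioner, run a pod that writes into the volume, read the file back over minikube ssh, then delete the pod and claim. The testdata manifests are not reproduced in this report; a minimal PVC of the same shape would look like this (a sketch assuming the addon's default local-path storage class and an arbitrary 64Mi request):

	kubectl --context addons-493618 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  storageClassName: local-path
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 64Mi
	EOF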
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-493618 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (234.416659ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 14:18:31.791954  106801 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:18:31.792298  106801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:31.792312  106801 out.go:374] Setting ErrFile to fd 2...
	I1018 14:18:31.792317  106801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:31.792627  106801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:18:31.793049  106801 mustload.go:65] Loading cluster: addons-493618
	I1018 14:18:31.793567  106801 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:31.793592  106801 addons.go:606] checking whether the cluster is paused
	I1018 14:18:31.793720  106801 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:31.793739  106801 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:18:31.794349  106801 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:18:31.813207  106801 ssh_runner.go:195] Run: systemctl --version
	I1018 14:18:31.813287  106801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:18:31.830580  106801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:18:31.925764  106801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:18:31.925831  106801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:18:31.958251  106801 cri.go:89] found id: "fcb7161ee1d1b988b7ab1002f35293f0b7091c9bac0d7bde7e8471426df926cf"
	I1018 14:18:31.958273  106801 cri.go:89] found id: "8f357a51c6b5dca76ccd31166d38783da025c5cbb4407847372de403612f8902"
	I1018 14:18:31.958277  106801 cri.go:89] found id: "530e145d6c2e072e1f708ba6909c358faf2489d1fb8aa08aa89f54faf841dbbf"
	I1018 14:18:31.958280  106801 cri.go:89] found id: "84cd4c11831db761e4325adee4879bf6a046512882f96bb1ca62c4882e6c8939"
	I1018 14:18:31.958283  106801 cri.go:89] found id: "10ae25ecd1d9000b05b1943468a2e37d7d7ad234a9cbda7cef9a4926bc9a192c"
	I1018 14:18:31.958286  106801 cri.go:89] found id: "50a19f5b596d4f200db4d0ab0cd9f04fbb94a5ef09f512aeefc81c7d4920b424"
	I1018 14:18:31.958295  106801 cri.go:89] found id: "859d5d72eef12d0f592a58b42cf0be1853b423f2c91a731d854b74ff79470e8f"
	I1018 14:18:31.958298  106801 cri.go:89] found id: "78aea4ac76ed251743d7541eaa9c63acd9c8c2af1858f2862ef1bb654b9423b3"
	I1018 14:18:31.958300  106801 cri.go:89] found id: "775733aea8bf0d456fffa861d7bbf1e97ecc24f79c468dcc0c8f760bfe0b6df0"
	I1018 14:18:31.958305  106801 cri.go:89] found id: "6673efa0776562914d4e7c35e4a4a121d60c7796edfc736b01120598fdf6dfda"
	I1018 14:18:31.958308  106801 cri.go:89] found id: "89679d50a3910712c81dd83e3ab5d310452f13a23c0dce3d8afeb4abec58b99f"
	I1018 14:18:31.958310  106801 cri.go:89] found id: "c52d44cde4f7100b3b7100db0eda92a6716001140bcb111fc1b8bad7bcffcd87"
	I1018 14:18:31.958312  106801 cri.go:89] found id: "92ceaca691f5147122ab362b3936a7201bede1ae2ac6288d04e5d0641f150e2f"
	I1018 14:18:31.958315  106801 cri.go:89] found id: "da0ddb2d0550b3553349e63e5796b3d59ac8c5a7d574c962118808b4750f4934"
	I1018 14:18:31.958317  106801 cri.go:89] found id: "a51f3eea295020493bcd77872d74756d33249e7b25591c8967346f770e49a885"
	I1018 14:18:31.958321  106801 cri.go:89] found id: "ca1869e801d6ed194d599a52234484412f6d9de897b6eadae30ca18325c92762"
	I1018 14:18:31.958323  106801 cri.go:89] found id: "7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca"
	I1018 14:18:31.958326  106801 cri.go:89] found id: "d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e"
	I1018 14:18:31.958329  106801 cri.go:89] found id: "778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750"
	I1018 14:18:31.958331  106801 cri.go:89] found id: "fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa"
	I1018 14:18:31.958333  106801 cri.go:89] found id: "f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4"
	I1018 14:18:31.958335  106801 cri.go:89] found id: "411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5"
	I1018 14:18:31.958338  106801 cri.go:89] found id: "857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1"
	I1018 14:18:31.958340  106801 cri.go:89] found id: "aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8"
	I1018 14:18:31.958343  106801 cri.go:89] found id: ""
	I1018 14:18:31.958388  106801 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 14:18:31.972962  106801 out.go:203] 
	W1018 14:18:31.974311  106801 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 14:18:31.974333  106801 out.go:285] * 
	* 
	W1018 14:18:31.979291  106801 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 14:18:31.980626  106801 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-493618 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (12.09s)

TestAddons/parallel/NvidiaDevicePlugin (5.25s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-w9ks6" [0837d0f2-cef9-4ff6-b233-2547cb3c5f55] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003406408s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-493618 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (241.441861ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 14:18:11.900599  104418 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:18:11.900852  104418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:11.900867  104418 out.go:374] Setting ErrFile to fd 2...
	I1018 14:18:11.900872  104418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:11.901205  104418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:18:11.901539  104418 mustload.go:65] Loading cluster: addons-493618
	I1018 14:18:11.901921  104418 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:11.901940  104418 addons.go:606] checking whether the cluster is paused
	I1018 14:18:11.902026  104418 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:11.902039  104418 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:18:11.902466  104418 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:18:11.920304  104418 ssh_runner.go:195] Run: systemctl --version
	I1018 14:18:11.920383  104418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:18:11.939784  104418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:18:12.036410  104418 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:18:12.036499  104418 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:18:12.066084  104418 cri.go:89] found id: "fcb7161ee1d1b988b7ab1002f35293f0b7091c9bac0d7bde7e8471426df926cf"
	I1018 14:18:12.066106  104418 cri.go:89] found id: "8f357a51c6b5dca76ccd31166d38783da025c5cbb4407847372de403612f8902"
	I1018 14:18:12.066110  104418 cri.go:89] found id: "530e145d6c2e072e1f708ba6909c358faf2489d1fb8aa08aa89f54faf841dbbf"
	I1018 14:18:12.066114  104418 cri.go:89] found id: "84cd4c11831db761e4325adee4879bf6a046512882f96bb1ca62c4882e6c8939"
	I1018 14:18:12.066118  104418 cri.go:89] found id: "10ae25ecd1d9000b05b1943468a2e37d7d7ad234a9cbda7cef9a4926bc9a192c"
	I1018 14:18:12.066121  104418 cri.go:89] found id: "50a19f5b596d4f200db4d0ab0cd9f04fbb94a5ef09f512aeefc81c7d4920b424"
	I1018 14:18:12.066124  104418 cri.go:89] found id: "859d5d72eef12d0f592a58b42cf0be1853b423f2c91a731d854b74ff79470e8f"
	I1018 14:18:12.066126  104418 cri.go:89] found id: "78aea4ac76ed251743d7541eaa9c63acd9c8c2af1858f2862ef1bb654b9423b3"
	I1018 14:18:12.066129  104418 cri.go:89] found id: "775733aea8bf0d456fffa861d7bbf1e97ecc24f79c468dcc0c8f760bfe0b6df0"
	I1018 14:18:12.066133  104418 cri.go:89] found id: "6673efa0776562914d4e7c35e4a4a121d60c7796edfc736b01120598fdf6dfda"
	I1018 14:18:12.066136  104418 cri.go:89] found id: "89679d50a3910712c81dd83e3ab5d310452f13a23c0dce3d8afeb4abec58b99f"
	I1018 14:18:12.066138  104418 cri.go:89] found id: "c52d44cde4f7100b3b7100db0eda92a6716001140bcb111fc1b8bad7bcffcd87"
	I1018 14:18:12.066141  104418 cri.go:89] found id: "92ceaca691f5147122ab362b3936a7201bede1ae2ac6288d04e5d0641f150e2f"
	I1018 14:18:12.066143  104418 cri.go:89] found id: "da0ddb2d0550b3553349e63e5796b3d59ac8c5a7d574c962118808b4750f4934"
	I1018 14:18:12.066147  104418 cri.go:89] found id: "a51f3eea295020493bcd77872d74756d33249e7b25591c8967346f770e49a885"
	I1018 14:18:12.066153  104418 cri.go:89] found id: "ca1869e801d6ed194d599a52234484412f6d9de897b6eadae30ca18325c92762"
	I1018 14:18:12.066157  104418 cri.go:89] found id: "7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca"
	I1018 14:18:12.066164  104418 cri.go:89] found id: "d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e"
	I1018 14:18:12.066168  104418 cri.go:89] found id: "778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750"
	I1018 14:18:12.066171  104418 cri.go:89] found id: "fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa"
	I1018 14:18:12.066175  104418 cri.go:89] found id: "f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4"
	I1018 14:18:12.066179  104418 cri.go:89] found id: "411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5"
	I1018 14:18:12.066183  104418 cri.go:89] found id: "857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1"
	I1018 14:18:12.066186  104418 cri.go:89] found id: "aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8"
	I1018 14:18:12.066189  104418 cri.go:89] found id: ""
	I1018 14:18:12.066225  104418 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 14:18:12.080058  104418 out.go:203] 
	W1018 14:18:12.081420  104418 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 14:18:12.081439  104418 out.go:285] * 
	* 
	W1018 14:18:12.086453  104418 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 14:18:12.087948  104418 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-493618 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.25s)

TestAddons/parallel/Yakd (6.24s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-cqgkj" [7ab5d53d-1423-49f2-9a5b-84ba4d0d62d5] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003600904s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-493618 addons disable yakd --alsologtostderr -v=1: exit status 11 (236.186331ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 14:18:18.142248  105504 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:18:18.142513  105504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:18.142523  105504 out.go:374] Setting ErrFile to fd 2...
	I1018 14:18:18.142528  105504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:18.142751  105504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:18:18.143059  105504 mustload.go:65] Loading cluster: addons-493618
	I1018 14:18:18.143428  105504 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:18.143449  105504 addons.go:606] checking whether the cluster is paused
	I1018 14:18:18.143541  105504 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:18.143556  105504 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:18:18.143979  105504 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:18:18.161925  105504 ssh_runner.go:195] Run: systemctl --version
	I1018 14:18:18.161998  105504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:18:18.180164  105504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:18:18.276908  105504 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:18:18.277024  105504 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:18:18.305591  105504 cri.go:89] found id: "fcb7161ee1d1b988b7ab1002f35293f0b7091c9bac0d7bde7e8471426df926cf"
	I1018 14:18:18.305612  105504 cri.go:89] found id: "8f357a51c6b5dca76ccd31166d38783da025c5cbb4407847372de403612f8902"
	I1018 14:18:18.305616  105504 cri.go:89] found id: "530e145d6c2e072e1f708ba6909c358faf2489d1fb8aa08aa89f54faf841dbbf"
	I1018 14:18:18.305619  105504 cri.go:89] found id: "84cd4c11831db761e4325adee4879bf6a046512882f96bb1ca62c4882e6c8939"
	I1018 14:18:18.305622  105504 cri.go:89] found id: "10ae25ecd1d9000b05b1943468a2e37d7d7ad234a9cbda7cef9a4926bc9a192c"
	I1018 14:18:18.305627  105504 cri.go:89] found id: "50a19f5b596d4f200db4d0ab0cd9f04fbb94a5ef09f512aeefc81c7d4920b424"
	I1018 14:18:18.305629  105504 cri.go:89] found id: "859d5d72eef12d0f592a58b42cf0be1853b423f2c91a731d854b74ff79470e8f"
	I1018 14:18:18.305632  105504 cri.go:89] found id: "78aea4ac76ed251743d7541eaa9c63acd9c8c2af1858f2862ef1bb654b9423b3"
	I1018 14:18:18.305635  105504 cri.go:89] found id: "775733aea8bf0d456fffa861d7bbf1e97ecc24f79c468dcc0c8f760bfe0b6df0"
	I1018 14:18:18.305644  105504 cri.go:89] found id: "6673efa0776562914d4e7c35e4a4a121d60c7796edfc736b01120598fdf6dfda"
	I1018 14:18:18.305647  105504 cri.go:89] found id: "89679d50a3910712c81dd83e3ab5d310452f13a23c0dce3d8afeb4abec58b99f"
	I1018 14:18:18.305649  105504 cri.go:89] found id: "c52d44cde4f7100b3b7100db0eda92a6716001140bcb111fc1b8bad7bcffcd87"
	I1018 14:18:18.305652  105504 cri.go:89] found id: "92ceaca691f5147122ab362b3936a7201bede1ae2ac6288d04e5d0641f150e2f"
	I1018 14:18:18.305654  105504 cri.go:89] found id: "da0ddb2d0550b3553349e63e5796b3d59ac8c5a7d574c962118808b4750f4934"
	I1018 14:18:18.305656  105504 cri.go:89] found id: "a51f3eea295020493bcd77872d74756d33249e7b25591c8967346f770e49a885"
	I1018 14:18:18.305660  105504 cri.go:89] found id: "ca1869e801d6ed194d599a52234484412f6d9de897b6eadae30ca18325c92762"
	I1018 14:18:18.305662  105504 cri.go:89] found id: "7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca"
	I1018 14:18:18.305666  105504 cri.go:89] found id: "d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e"
	I1018 14:18:18.305669  105504 cri.go:89] found id: "778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750"
	I1018 14:18:18.305671  105504 cri.go:89] found id: "fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa"
	I1018 14:18:18.305674  105504 cri.go:89] found id: "f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4"
	I1018 14:18:18.305676  105504 cri.go:89] found id: "411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5"
	I1018 14:18:18.305678  105504 cri.go:89] found id: "857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1"
	I1018 14:18:18.305681  105504 cri.go:89] found id: "aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8"
	I1018 14:18:18.305693  105504 cri.go:89] found id: ""
	I1018 14:18:18.305732  105504 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 14:18:18.320430  105504 out.go:203] 
	W1018 14:18:18.321886  105504 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 14:18:18.321931  105504 out.go:285] * 
	* 
	W1018 14:18:18.327087  105504 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 14:18:18.328907  105504 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-493618 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.24s)

TestAddons/parallel/AmdGpuDevicePlugin (5.28s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-ps8fn" [212e7c35-e96b-474d-a839-a1234082db1e] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.004325497s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493618 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-493618 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (274.220252ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 14:18:14.444850  104986 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:18:14.445041  104986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:14.445054  104986 out.go:374] Setting ErrFile to fd 2...
	I1018 14:18:14.445061  104986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:18:14.445385  104986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:18:14.445746  104986 mustload.go:65] Loading cluster: addons-493618
	I1018 14:18:14.446302  104986 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:14.446324  104986 addons.go:606] checking whether the cluster is paused
	I1018 14:18:14.446447  104986 config.go:182] Loaded profile config "addons-493618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:18:14.446466  104986 host.go:66] Checking if "addons-493618" exists ...
	I1018 14:18:14.447184  104986 cli_runner.go:164] Run: docker container inspect addons-493618 --format={{.State.Status}}
	I1018 14:18:14.469325  104986 ssh_runner.go:195] Run: systemctl --version
	I1018 14:18:14.469386  104986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493618
	I1018 14:18:14.492790  104986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/addons-493618/id_rsa Username:docker}
	I1018 14:18:14.595298  104986 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 14:18:14.595400  104986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 14:18:14.627303  104986 cri.go:89] found id: "fcb7161ee1d1b988b7ab1002f35293f0b7091c9bac0d7bde7e8471426df926cf"
	I1018 14:18:14.627324  104986 cri.go:89] found id: "8f357a51c6b5dca76ccd31166d38783da025c5cbb4407847372de403612f8902"
	I1018 14:18:14.627328  104986 cri.go:89] found id: "530e145d6c2e072e1f708ba6909c358faf2489d1fb8aa08aa89f54faf841dbbf"
	I1018 14:18:14.627331  104986 cri.go:89] found id: "84cd4c11831db761e4325adee4879bf6a046512882f96bb1ca62c4882e6c8939"
	I1018 14:18:14.627333  104986 cri.go:89] found id: "10ae25ecd1d9000b05b1943468a2e37d7d7ad234a9cbda7cef9a4926bc9a192c"
	I1018 14:18:14.627336  104986 cri.go:89] found id: "50a19f5b596d4f200db4d0ab0cd9f04fbb94a5ef09f512aeefc81c7d4920b424"
	I1018 14:18:14.627338  104986 cri.go:89] found id: "859d5d72eef12d0f592a58b42cf0be1853b423f2c91a731d854b74ff79470e8f"
	I1018 14:18:14.627341  104986 cri.go:89] found id: "78aea4ac76ed251743d7541eaa9c63acd9c8c2af1858f2862ef1bb654b9423b3"
	I1018 14:18:14.627343  104986 cri.go:89] found id: "775733aea8bf0d456fffa861d7bbf1e97ecc24f79c468dcc0c8f760bfe0b6df0"
	I1018 14:18:14.627349  104986 cri.go:89] found id: "6673efa0776562914d4e7c35e4a4a121d60c7796edfc736b01120598fdf6dfda"
	I1018 14:18:14.627352  104986 cri.go:89] found id: "89679d50a3910712c81dd83e3ab5d310452f13a23c0dce3d8afeb4abec58b99f"
	I1018 14:18:14.627355  104986 cri.go:89] found id: "c52d44cde4f7100b3b7100db0eda92a6716001140bcb111fc1b8bad7bcffcd87"
	I1018 14:18:14.627362  104986 cri.go:89] found id: "92ceaca691f5147122ab362b3936a7201bede1ae2ac6288d04e5d0641f150e2f"
	I1018 14:18:14.627366  104986 cri.go:89] found id: "da0ddb2d0550b3553349e63e5796b3d59ac8c5a7d574c962118808b4750f4934"
	I1018 14:18:14.627368  104986 cri.go:89] found id: "a51f3eea295020493bcd77872d74756d33249e7b25591c8967346f770e49a885"
	I1018 14:18:14.627374  104986 cri.go:89] found id: "ca1869e801d6ed194d599a52234484412f6d9de897b6eadae30ca18325c92762"
	I1018 14:18:14.627377  104986 cri.go:89] found id: "7fc1c430e912b6e6881233a624c392cae950a23559653e00615374c6bb3998ca"
	I1018 14:18:14.627381  104986 cri.go:89] found id: "d41651660ae84d5b4538b902da8dd53b5d12fcc418444f71dec93c2d55fecd6e"
	I1018 14:18:14.627383  104986 cri.go:89] found id: "778f4f35207fc32858e337c8dfcb9db8dd2800c065d4cd3011abe071979a7750"
	I1018 14:18:14.627386  104986 cri.go:89] found id: "fc19fe3563e015b938b7ac91ba43ce9296326103f0946aacb2edf978e13652fa"
	I1018 14:18:14.627390  104986 cri.go:89] found id: "f616a2d4df678ba287c978ccadc5d2f4b27aa490402be790d3139b7c782e45d4"
	I1018 14:18:14.627393  104986 cri.go:89] found id: "411a5716e9150db5417f419cfe20d7e4fe1975fa2a6101bd9603be2dd22686a5"
	I1018 14:18:14.627395  104986 cri.go:89] found id: "857014c2e77ee6f8d6897bba446b3b50691d3779802c16bf95b1990d242b91a1"
	I1018 14:18:14.627397  104986 cri.go:89] found id: "aa8c1cbd9ac9c68beb713a4a22dceecc0ebd1333069b8b423674648ee1438fe8"
	I1018 14:18:14.627399  104986 cri.go:89] found id: ""
	I1018 14:18:14.627436  104986 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 14:18:14.643100  104986 out.go:203] 
	W1018 14:18:14.644593  104986 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:18:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 14:18:14.644633  104986 out.go:285] * 
	* 
	W1018 14:18:14.649902  104986 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 14:18:14.651322  104986 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-493618 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.28s)

TestFunctional/parallel/DashboardCmd (302.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-823635 --alsologtostderr -v=1]
E1018 14:33:26.056880   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
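Note: the stderr dump below shows why no URL was produced. minikube applied the dashboard manifests, started `kubectl proxy` on port 36195, and then polled the dashboard Service through the apiserver proxy, but every probe of http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ returned 503 Service Unavailable, so the retry loop kept backing off until the test gave up. A 503 from the apiserver proxy usually means the Service has no ready endpoints; a hand-check sketch under that assumption (context name, port, and URL taken from the logs):

	kubectl --context functional-823635 -n kubernetes-dashboard get pods,endpoints   # is the dashboard pod Ready? does the Service have endpoints?
	kubectl --context functional-823635 proxy --port 36195 &
	curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/   # expect 200 once an endpoint is ready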
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-823635 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-823635 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-823635 --alsologtostderr -v=1] stderr:
I1018 14:33:20.049658  137712 out.go:360] Setting OutFile to fd 1 ...
I1018 14:33:20.049937  137712 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:33:20.049946  137712 out.go:374] Setting ErrFile to fd 2...
I1018 14:33:20.049950  137712 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:33:20.050170  137712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
I1018 14:33:20.050437  137712 mustload.go:65] Loading cluster: functional-823635
I1018 14:33:20.050863  137712 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:33:20.051435  137712 cli_runner.go:164] Run: docker container inspect functional-823635 --format={{.State.Status}}
I1018 14:33:20.070104  137712 host.go:66] Checking if "functional-823635" exists ...
I1018 14:33:20.070390  137712 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1018 14:33:20.129903  137712 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-18 14:33:20.119583911 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1018 14:33:20.130120  137712 api_server.go:166] Checking apiserver status ...
I1018 14:33:20.130175  137712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 14:33:20.130224  137712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-823635
I1018 14:33:20.147221  137712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/functional-823635/id_rsa Username:docker}
I1018 14:33:20.247615  137712 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4191/cgroup
W1018 14:33:20.256140  137712 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4191/cgroup: Process exited with status 1
stdout:

stderr:
I1018 14:33:20.256188  137712 ssh_runner.go:195] Run: ls
I1018 14:33:20.260040  137712 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1018 14:33:20.264558  137712 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1018 14:33:20.264598  137712 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1018 14:33:20.264742  137712 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:33:20.264752  137712 addons.go:69] Setting dashboard=true in profile "functional-823635"
I1018 14:33:20.264758  137712 addons.go:238] Setting addon dashboard=true in "functional-823635"
I1018 14:33:20.264786  137712 host.go:66] Checking if "functional-823635" exists ...
I1018 14:33:20.265136  137712 cli_runner.go:164] Run: docker container inspect functional-823635 --format={{.State.Status}}
I1018 14:33:20.283848  137712 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1018 14:33:20.285016  137712 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1018 14:33:20.286195  137712 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1018 14:33:20.286217  137712 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1018 14:33:20.286285  137712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-823635
I1018 14:33:20.303485  137712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/functional-823635/id_rsa Username:docker}
I1018 14:33:20.405881  137712 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1018 14:33:20.405929  137712 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1018 14:33:20.419118  137712 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1018 14:33:20.419146  137712 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1018 14:33:20.432246  137712 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1018 14:33:20.432271  137712 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1018 14:33:20.445365  137712 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1018 14:33:20.445388  137712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1018 14:33:20.459075  137712 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1018 14:33:20.459101  137712 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1018 14:33:20.473437  137712 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1018 14:33:20.473460  137712 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1018 14:33:20.486401  137712 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1018 14:33:20.486425  137712 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1018 14:33:20.499225  137712 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1018 14:33:20.499255  137712 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1018 14:33:20.512188  137712 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1018 14:33:20.512217  137712 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1018 14:33:20.526109  137712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1018 14:33:20.956954  137712 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-823635 addons enable metrics-server

I1018 14:33:20.958171  137712 addons.go:201] Writing out "functional-823635" config to set dashboard=true...
W1018 14:33:20.958407  137712 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1018 14:33:20.959088  137712 kapi.go:59] client config for functional-823635: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.key", CAFile:"/home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1018 14:33:20.959529  137712 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1018 14:33:20.959544  137712 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1018 14:33:20.959548  137712 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1018 14:33:20.959554  137712 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1018 14:33:20.959560  137712 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1018 14:33:20.968095  137712 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  ff6fca55-42e6-42e9-8257-4c51733ea3fb 1111 0 2025-10-18 14:33:20 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-18 14:33:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.98.218.253,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.98.218.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1018 14:33:20.968228  137712 out.go:285] * Launching proxy ...
* Launching proxy ...
I1018 14:33:20.968290  137712 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-823635 proxy --port 36195]
I1018 14:33:20.968623  137712 dashboard.go:157] Waiting for kubectl to output host:port ...
I1018 14:33:21.013812  137712 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W1018 14:33:21.013885  137712 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1018 14:33:21.022309  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c9215801-aa61-42a2-9646-2e0292e56dac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc000857080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000341680 TLS:<nil>}
I1018 14:33:21.022390  137712 retry.go:31] will retry after 133.067µs: Temporary Error: unexpected response code: 503
I1018 14:33:21.026017  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d22fd51b-bf1f-43b7-bae7-1a9f1afbd331] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc000806480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e83c0 TLS:<nil>}
I1018 14:33:21.026106  137712 retry.go:31] will retry after 221.33µs: Temporary Error: unexpected response code: 503
I1018 14:33:21.029369  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b75f5b7b-531a-4ef0-be09-e1e9e06ab950] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc000857200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000758000 TLS:<nil>}
I1018 14:33:21.029428  137712 retry.go:31] will retry after 121.965µs: Temporary Error: unexpected response code: 503
I1018 14:33:21.032608  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8d9a607b-c182-471d-a5ed-6a354e50541d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc001747a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8640 TLS:<nil>}
I1018 14:33:21.032656  137712 retry.go:31] will retry after 487.232µs: Temporary Error: unexpected response code: 503
I1018 14:33:21.035609  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[281abebf-5e33-44f2-8bdf-55f2fd0debc6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc0008065c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003417c0 TLS:<nil>}
I1018 14:33:21.035654  137712 retry.go:31] will retry after 526.307µs: Temporary Error: unexpected response code: 503
I1018 14:33:21.038606  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[904c7188-3b42-42fb-8603-003418cfa50d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc001747b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000758140 TLS:<nil>}
I1018 14:33:21.038648  137712 retry.go:31] will retry after 1.053452ms: Temporary Error: unexpected response code: 503
I1018 14:33:21.041651  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[306e8dab-c4fe-463f-bd49-22f968c49cd1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc000857300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000341b80 TLS:<nil>}
I1018 14:33:21.041687  137712 retry.go:31] will retry after 1.039539ms: Temporary Error: unexpected response code: 503
I1018 14:33:21.045067  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c18962ca-c30a-4842-a2bd-ce15081c28b9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc0008066c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8780 TLS:<nil>}
I1018 14:33:21.045115  137712 retry.go:31] will retry after 2.144761ms: Temporary Error: unexpected response code: 503
I1018 14:33:21.050458  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[acbecb10-9be7-4418-aab5-0503aba4c095] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc001747c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000758280 TLS:<nil>}
I1018 14:33:21.050506  137712 retry.go:31] will retry after 3.71302ms: Temporary Error: unexpected response code: 503
I1018 14:33:21.056953  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9dbb0e1f-8d18-4360-94d6-1e18c4cb6f2b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc000857440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000341cc0 TLS:<nil>}
I1018 14:33:21.056993  137712 retry.go:31] will retry after 2.470094ms: Temporary Error: unexpected response code: 503
I1018 14:33:21.062416  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[599899fd-f0e7-4134-9d74-8b89a54eb2a2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc000857540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e88c0 TLS:<nil>}
I1018 14:33:21.062459  137712 retry.go:31] will retry after 7.681469ms: Temporary Error: unexpected response code: 503
I1018 14:33:21.073172  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[20f42a49-25b4-4047-833a-be3c27698d5c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc000806800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8a00 TLS:<nil>}
I1018 14:33:21.073244  137712 retry.go:31] will retry after 8.769107ms: Temporary Error: unexpected response code: 503
I1018 14:33:21.085123  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[695dbcc9-0398-4b5b-9f33-dd4ff8a454f8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc001747d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007583c0 TLS:<nil>}
I1018 14:33:21.085179  137712 retry.go:31] will retry after 18.045791ms: Temporary Error: unexpected response code: 503
I1018 14:33:21.106284  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[710c6ff9-4d1d-485c-ba28-34d4d37ae7a5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc000857640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000341e00 TLS:<nil>}
I1018 14:33:21.106368  137712 retry.go:31] will retry after 19.133762ms: Temporary Error: unexpected response code: 503
I1018 14:33:21.129499  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[238ce3a8-d399-4544-8b63-a216bbbe21bd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc001747e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8b40 TLS:<nil>}
I1018 14:33:21.129559  137712 retry.go:31] will retry after 37.53447ms: Temporary Error: unexpected response code: 503
I1018 14:33:21.170497  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[00db48a0-6a22-4ba9-959d-e5ba4a2c87eb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc000806900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017e6000 TLS:<nil>}
I1018 14:33:21.170571  137712 retry.go:31] will retry after 58.578451ms: Temporary Error: unexpected response code: 503
I1018 14:33:21.233168  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[01386be5-d7f3-4fd2-8046-0087717cc02b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc000806980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017e6140 TLS:<nil>}
I1018 14:33:21.233243  137712 retry.go:31] will retry after 86.316215ms: Temporary Error: unexpected response code: 503
I1018 14:33:21.323634  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c594bd7f-2f20-41aa-822c-df84512a3927] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc000806a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000758500 TLS:<nil>}
I1018 14:33:21.323695  137712 retry.go:31] will retry after 82.400363ms: Temporary Error: unexpected response code: 503
I1018 14:33:21.410192  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a632da30-b1ff-48d2-b0fb-49766c2e4e93] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc0017fa040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000758640 TLS:<nil>}
I1018 14:33:21.410253  137712 retry.go:31] will retry after 92.846168ms: Temporary Error: unexpected response code: 503
I1018 14:33:21.506688  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e30b65c5-4663-47b7-ba38-63810ab66900] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc000857800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017e6280 TLS:<nil>}
I1018 14:33:21.506752  137712 retry.go:31] will retry after 174.286578ms: Temporary Error: unexpected response code: 503
I1018 14:33:21.685181  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7a14004b-1dc3-4aa3-adf3-9c81786d7cc0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc000806b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8c80 TLS:<nil>}
I1018 14:33:21.685254  137712 retry.go:31] will retry after 235.19091ms: Temporary Error: unexpected response code: 503
I1018 14:33:21.923131  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[acec321b-20fb-4e18-8508-c37ef80c6b9c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:21 GMT]] Body:0xc000857900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000758780 TLS:<nil>}
I1018 14:33:21.923206  137712 retry.go:31] will retry after 330.442108ms: Temporary Error: unexpected response code: 503
I1018 14:33:22.257741  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4e85e0db-8d86-406a-9c01-f96cac6703f6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:22 GMT]] Body:0xc0017fa100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8dc0 TLS:<nil>}
I1018 14:33:22.257802  137712 retry.go:31] will retry after 707.245587ms: Temporary Error: unexpected response code: 503
I1018 14:33:22.968710  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4ccd65ac-8fa8-49f5-a987-26287536c207] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:22 GMT]] Body:0xc000857a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017e63c0 TLS:<nil>}
I1018 14:33:22.968792  137712 retry.go:31] will retry after 1.076533601s: Temporary Error: unexpected response code: 503
I1018 14:33:24.048997  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4313964c-72dd-422f-9d8d-12d55a0971f4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:24 GMT]] Body:0xc0017fa200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8f00 TLS:<nil>}
I1018 14:33:24.049068  137712 retry.go:31] will retry after 1.964297136s: Temporary Error: unexpected response code: 503
I1018 14:33:26.017508  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b84dac74-7090-496d-bae3-4ef0c48388c0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:26 GMT]] Body:0xc000806c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017e6500 TLS:<nil>}
I1018 14:33:26.017594  137712 retry.go:31] will retry after 3.204980583s: Temporary Error: unexpected response code: 503
I1018 14:33:29.227750  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dd3dc4b3-9f3f-4022-9310-5ac7141aa4e1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:29 GMT]] Body:0xc000857b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007588c0 TLS:<nil>}
I1018 14:33:29.227847  137712 retry.go:31] will retry after 5.511426677s: Temporary Error: unexpected response code: 503
I1018 14:33:34.745832  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e2d34995-fc4b-44e1-9d7e-572393a7a058] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:34 GMT]] Body:0xc0017fa300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000758a00 TLS:<nil>}
I1018 14:33:34.745896  137712 retry.go:31] will retry after 8.51877091s: Temporary Error: unexpected response code: 503
I1018 14:33:43.269568  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7156df72-c201-44d3-a4c3-8b61be0d31e7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:43 GMT]] Body:0xc000857c40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017e6640 TLS:<nil>}
I1018 14:33:43.269630  137712 retry.go:31] will retry after 9.00439829s: Temporary Error: unexpected response code: 503
I1018 14:33:52.278158  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1bf9480a-cf08-4bf9-9bd4-969ae051bae2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:33:52 GMT]] Body:0xc0017fa400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e9180 TLS:<nil>}
I1018 14:33:52.278246  137712 retry.go:31] will retry after 15.061678817s: Temporary Error: unexpected response code: 503
I1018 14:34:07.344207  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0d3d3751-a39e-488f-a42c-76a87b84ce20] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:34:07 GMT]] Body:0xc000857d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000758b40 TLS:<nil>}
I1018 14:34:07.344288  137712 retry.go:31] will retry after 13.447495983s: Temporary Error: unexpected response code: 503
I1018 14:34:20.795729  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1dd6ee06-71ba-4f2e-9b52-8d3c408553dd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:34:20 GMT]] Body:0xc0017fa480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e92c0 TLS:<nil>}
I1018 14:34:20.795819  137712 retry.go:31] will retry after 15.143352749s: Temporary Error: unexpected response code: 503
I1018 14:34:35.942870  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d8d18eb5-592b-4664-877c-a0259be27dbe] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:34:35 GMT]] Body:0xc0017fa500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000758c80 TLS:<nil>}
I1018 14:34:35.942959  137712 retry.go:31] will retry after 31.998897674s: Temporary Error: unexpected response code: 503
I1018 14:35:07.948185  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a2c227db-6b77-44c6-8904-d77fba2dc0d6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:35:07 GMT]] Body:0xc0017fa580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000758dc0 TLS:<nil>}
I1018 14:35:07.948251  137712 retry.go:31] will retry after 36.531543695s: Temporary Error: unexpected response code: 503
I1018 14:35:44.485616  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7020a196-8a03-4b9b-a45c-39084ebc3d87] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:35:44 GMT]] Body:0xc0017fa080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8140 TLS:<nil>}
I1018 14:35:44.485701  137712 retry.go:31] will retry after 1m0.872424658s: Temporary Error: unexpected response code: 503
I1018 14:36:45.361613  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4c0faa84-391e-45af-97e0-2ab0f47f0fce] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:36:45 GMT]] Body:0xc00051e300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e9680 TLS:<nil>}
I1018 14:36:45.361692  137712 retry.go:31] will retry after 34.391759862s: Temporary Error: unexpected response code: 503
I1018 14:37:19.756976  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a436376f-22df-46a3-b142-6462077707b1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:37:19 GMT]] Body:0xc00051e380 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017e6780 TLS:<nil>}
I1018 14:37:19.757049  137712 retry.go:31] will retry after 50.795186361s: Temporary Error: unexpected response code: 503
I1018 14:38:10.555510  137712 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c15bf3a6-1ed5-41c7-9c70-82a25128aa79] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 14:38:10 GMT]] Body:0xc0017fa040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017e68c0 TLS:<nil>}
I1018 14:38:10.555595  137712 retry.go:31] will retry after 30.399741485s: Temporary Error: unexpected response code: 503
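The retry cadence above, with delays roughly doubling from about 40ms into the minute range plus jitter, is a standard exponential-backoff poll against an endpoint that keeps answering 503. Below is a minimal Go sketch of the same idea; it is illustrative only, not minikube's actual retry.go, and the URL and timeout are placeholders copied from the log.

// backoff_sketch.go: illustrative only; not minikube's retry.go.
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// pollUntilReady polls url until it returns 200, backing off roughly
// exponentially with jitter, mirroring the retry cadence in the log above.
func pollUntilReady(url string, deadline time.Duration) error {
	delay := 40 * time.Millisecond // starting point, as in the first retry above
	start := time.Now()
	for time.Since(start) < deadline {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("will retry after %v: unexpected response code: %d\n", delay, resp.StatusCode)
		}
		// +/-25% jitter so concurrent pollers do not synchronize.
		jitter := time.Duration(rand.Int63n(int64(delay/2))) - delay/4
		time.Sleep(delay + jitter)
		if delay < time.Minute {
			delay *= 2
		}
	}
	return fmt.Errorf("%s still unavailable after %v", url, deadline)
}

func main() {
	// Placeholder URL copied from the log; the proxy port is assigned per run.
	if err := pollUntilReady("http://127.0.0.1:36195/", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}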
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-823635
helpers_test.go:243: (dbg) docker inspect functional-823635:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e",
	        "Created": "2025-10-18T14:26:18.436737288Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 120898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T14:26:18.470822803Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e/hostname",
	        "HostsPath": "/var/lib/docker/containers/0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e/hosts",
	        "LogPath": "/var/lib/docker/containers/0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e/0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e-json.log",
	        "Name": "/functional-823635",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-823635:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-823635",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e",
	                "LowerDir": "/var/lib/docker/overlay2/5082ed7a7d4f3cdf2f9271d923fd3b0d056c6762d4c76a1ba4517906c5b1b1bf-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5082ed7a7d4f3cdf2f9271d923fd3b0d056c6762d4c76a1ba4517906c5b1b1bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5082ed7a7d4f3cdf2f9271d923fd3b0d056c6762d4c76a1ba4517906c5b1b1bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5082ed7a7d4f3cdf2f9271d923fd3b0d056c6762d4c76a1ba4517906c5b1b1bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-823635",
	                "Source": "/var/lib/docker/volumes/functional-823635/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-823635",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-823635",
	                "name.minikube.sigs.k8s.io": "functional-823635",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fd03071fcaf4a0d71521bffa1eb6767116fc7bde333deaa49c9042ef66155301",
	            "SandboxKey": "/var/run/docker/netns/fd03071fcaf4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-823635": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:17:41:74:60:89",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ac30e007c0f5497eef211625aeba4ddabc991ddfbfb64985fe205fdaca6d7800",
	                    "EndpointID": "70ea1dea80c293bd2b5dcbe0155d3887dc026762d0c6aed349ff54f357d2d760",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-823635",
	                        "0cd7caf20b47"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
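The inspect dump above is the full container record, but a post-mortem usually needs only a field or two, such as the published host port behind the node's 8441/tcp (apiserver) mapping. A small sketch that pulls just that field with docker inspect's Go-template --format flag, assuming the docker CLI is on PATH and reusing the profile name from the dump:

// port_lookup.go: illustrative sketch, not part of the test suite.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent to:
	//   docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-823635
	out, err := exec.Command("docker", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
		"functional-823635").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// Prints 32781 for the dump above.
	fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
}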
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-823635 -n functional-823635
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-823635 logs -n 25: (1.252906537s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-823635 ssh sudo systemctl is-active containerd                                                                                                       │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │                     │
	│ image          │ functional-823635 image load --daemon kicbase/echo-server:functional-823635 --alsologtostderr                                                                   │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image ls                                                                                                                                      │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image load --daemon kicbase/echo-server:functional-823635 --alsologtostderr                                                                   │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image ls                                                                                                                                      │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image load --daemon kicbase/echo-server:functional-823635 --alsologtostderr                                                                   │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image ls                                                                                                                                      │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image save kicbase/echo-server:functional-823635 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image rm kicbase/echo-server:functional-823635 --alsologtostderr                                                                              │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image ls                                                                                                                                      │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image save --daemon kicbase/echo-server:functional-823635 --alsologtostderr                                                                   │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ start          │ -p functional-823635 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │                     │
	│ start          │ -p functional-823635 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                 │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-823635 --alsologtostderr -v=1                                                                                                  │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │                     │
	│ update-context │ functional-823635 update-context --alsologtostderr -v=2                                                                                                         │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ update-context │ functional-823635 update-context --alsologtostderr -v=2                                                                                                         │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ update-context │ functional-823635 update-context --alsologtostderr -v=2                                                                                                         │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-823635 image ls --format short --alsologtostderr                                                                                                     │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-823635 image ls --format json --alsologtostderr                                                                                                      │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-823635 image ls --format table --alsologtostderr                                                                                                     │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-823635 image ls --format yaml --alsologtostderr                                                                                                      │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ ssh            │ functional-823635 ssh pgrep buildkitd                                                                                                                           │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │                     │
	│ image          │ functional-823635 image build -t localhost/my-image:functional-823635 testdata/build --alsologtostderr                                                          │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-823635 image ls                                                                                                                                      │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 14:33:19
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 14:33:19.837101  137575 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:33:19.837214  137575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:33:19.837219  137575 out.go:374] Setting ErrFile to fd 2...
	I1018 14:33:19.837224  137575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:33:19.837429  137575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:33:19.837864  137575 out.go:368] Setting JSON to false
	I1018 14:33:19.838774  137575 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":8151,"bootTime":1760789849,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:33:19.838873  137575 start.go:141] virtualization: kvm guest
	I1018 14:33:19.840818  137575 out.go:179] * [functional-823635] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 14:33:19.842355  137575 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 14:33:19.842339  137575 notify.go:220] Checking for updates...
	I1018 14:33:19.843868  137575 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:33:19.845097  137575 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 14:33:19.846215  137575 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 14:33:19.847364  137575 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 14:33:19.848501  137575 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 14:33:19.850035  137575 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:33:19.850568  137575 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:33:19.873951  137575 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 14:33:19.874066  137575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:33:19.933740  137575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-18 14:33:19.923050776 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:33:19.933880  137575 docker.go:318] overlay module found
	I1018 14:33:19.935576  137575 out.go:179] * Using the docker driver based on existing profile
	I1018 14:33:19.936824  137575 start.go:305] selected driver: docker
	I1018 14:33:19.936841  137575 start.go:925] validating driver "docker" against &{Name:functional-823635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-823635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:33:19.936968  137575 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 14:33:19.937071  137575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:33:19.996183  137575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-18 14:33:19.986250339 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:33:19.996789  137575 cni.go:84] Creating CNI manager for ""
	I1018 14:33:19.996857  137575 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 14:33:19.996897  137575 start.go:349] cluster config:
	{Name:functional-823635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-823635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:33:19.999720  137575 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 18 14:36:08 functional-823635 crio[3554]: time="2025-10-18T14:36:08.780946383Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=441f3caf-7c91-4eb0-8280-207debdf2851 name=/runtime.v1.ImageService/PullImage
	Oct 18 14:36:08 functional-823635 crio[3554]: time="2025-10-18T14:36:08.785021364Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 18 14:36:09 functional-823635 crio[3554]: time="2025-10-18T14:36:09.406753044Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=faa1823a-ffe8-49f6-8823-85667f3065be name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:36:09 functional-823635 crio[3554]: time="2025-10-18T14:36:09.407026642Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=faa1823a-ffe8-49f6-8823-85667f3065be name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:36:09 functional-823635 crio[3554]: time="2025-10-18T14:36:09.407090918Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=faa1823a-ffe8-49f6-8823-85667f3065be name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:36:20 functional-823635 crio[3554]: time="2025-10-18T14:36:20.996810411Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=05621c5a-bd6a-4e1d-8121-6fa9c40779a6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:36:20 functional-823635 crio[3554]: time="2025-10-18T14:36:20.997050973Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=05621c5a-bd6a-4e1d-8121-6fa9c40779a6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:36:20 functional-823635 crio[3554]: time="2025-10-18T14:36:20.997113229Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=05621c5a-bd6a-4e1d-8121-6fa9c40779a6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:36:40 functional-823635 crio[3554]: time="2025-10-18T14:36:40.113621274Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 18 14:36:55 functional-823635 crio[3554]: time="2025-10-18T14:36:55.357678257Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c6f98c45-d0c3-4493-8485-7793a8bb0b13 name=/runtime.v1.ImageService/PullImage
	Oct 18 14:36:55 functional-823635 crio[3554]: time="2025-10-18T14:36:55.35858914Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=04c2393e-30f0-402e-88e3-4852cf44a0c0 name=/runtime.v1.ImageService/PullImage
	Oct 18 14:36:55 functional-823635 crio[3554]: time="2025-10-18T14:36:55.35937317Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=bcf2188e-a529-4e19-b913-69f8c2a5289e name=/runtime.v1.ImageService/PullImage
	Oct 18 14:36:55 functional-823635 crio[3554]: time="2025-10-18T14:36:55.362536303Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 18 14:36:55 functional-823635 crio[3554]: time="2025-10-18T14:36:55.52587035Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=aa31b427-6590-4d9d-9ab9-7428f5a7b27a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:36:55 functional-823635 crio[3554]: time="2025-10-18T14:36:55.526105329Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=aa31b427-6590-4d9d-9ab9-7428f5a7b27a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:36:55 functional-823635 crio[3554]: time="2025-10-18T14:36:55.526163686Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 found" id=aa31b427-6590-4d9d-9ab9-7428f5a7b27a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:37:09 functional-823635 crio[3554]: time="2025-10-18T14:37:09.996569187Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a51175ed-02c9-412f-a456-789d3b7a360c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:37:09 functional-823635 crio[3554]: time="2025-10-18T14:37:09.996786085Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=a51175ed-02c9-412f-a456-789d3b7a360c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:37:09 functional-823635 crio[3554]: time="2025-10-18T14:37:09.996855974Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 found" id=a51175ed-02c9-412f-a456-789d3b7a360c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:37:26 functional-823635 crio[3554]: time="2025-10-18T14:37:26.691518379Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 18 14:37:58 functional-823635 crio[3554]: time="2025-10-18T14:37:58.024169154Z" level=info msg="Pulling image: docker.io/nginx:latest" id=d8b29e89-6361-4b6a-bd64-5b486bfcd06c name=/runtime.v1.ImageService/PullImage
	Oct 18 14:37:58 functional-823635 crio[3554]: time="2025-10-18T14:37:58.027458984Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 18 14:38:09 functional-823635 crio[3554]: time="2025-10-18T14:38:09.996410436Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=3299f41a-45ff-448a-b31a-f9878f764866 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:38:09 functional-823635 crio[3554]: time="2025-10-18T14:38:09.996555181Z" level=info msg="Image docker.io/nginx:alpine not found" id=3299f41a-45ff-448a-b31a-f9878f764866 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:38:09 functional-823635 crio[3554]: time="2025-10-18T14:38:09.996590105Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=3299f41a-45ff-448a-b31a-f9878f764866 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	dd337385fed92       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 minutes ago       Exited              mount-munger              0                   28fbf2e5ed90a       busybox-mount                               default
	93e37f902f52d       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da       10 minutes ago      Running             mysql                     0                   7138c0b9f3baa       mysql-5bb876957f-8kx2d                      default
	f4f02a115fe05       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      10 minutes ago      Running             kube-apiserver            0                   521e60ec1c4f9       kube-apiserver-functional-823635            kube-system
	510ec089b935b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      1                   fb579c4379df7       etcd-functional-823635                      kube-system
	480b306de65e2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      10 minutes ago      Running             kube-controller-manager   2                   3c1e4cb8043c7       kube-controller-manager-functional-823635   kube-system
	65a173c29bb11       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      11 minutes ago      Running             kube-scheduler            1                   bab606ab6f35c       kube-scheduler-functional-823635            kube-system
	c89c1234ce311       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      11 minutes ago      Exited              kube-controller-manager   1                   3c1e4cb8043c7       kube-controller-manager-functional-823635   kube-system
	b42a28b08255a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      11 minutes ago      Running             kindnet-cni               1                   8bd811be0c01b       kindnet-stt2s                               kube-system
	0973b81bb4630       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Running             storage-provisioner       1                   717cdc288a802       storage-provisioner                         kube-system
	aa85de7b0ebd5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Running             coredns                   1                   fbb2b9970544a       coredns-66bc5c9577-zdmkg                    kube-system
	0caa23e95037b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      11 minutes ago      Running             kube-proxy                1                   a42ae80a6ca10       kube-proxy-b9mv2                            kube-system
	84e8835721763       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   0                   fbb2b9970544a       coredns-66bc5c9577-zdmkg                    kube-system
	79e2f9ff59d01       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       0                   717cdc288a802       storage-provisioner                         kube-system
	c98f6eb04b82a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      11 minutes ago      Exited              kindnet-cni               0                   8bd811be0c01b       kindnet-stt2s                               kube-system
	089827e2b297a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      11 minutes ago      Exited              kube-proxy                0                   a42ae80a6ca10       kube-proxy-b9mv2                            kube-system
	50c2de213fe05       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      11 minutes ago      Exited              kube-scheduler            0                   bab606ab6f35c       kube-scheduler-functional-823635            kube-system
	4e3e13eed9434       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Exited              etcd                      0                   fb579c4379df7       etcd-functional-823635                      kube-system
	
	
	==> coredns [84e8835721763a112dee2effc0c878e7ded9cfb104b777493d0895f93b72052a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37458 - 10645 "HINFO IN 7231688986531392643.6176522986195705811. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.141536718s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [aa85de7b0ebd51c3eafa74cf48260bda3fc8d2bb6a5326417290a56f26baf88d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38471 - 27374 "HINFO IN 7208865881712799088.8543358163044453170. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.104517016s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
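	
	Note on the restarted CoreDNS instance above: the "waiting for Kubernetes API" and connection-refused lines cover the window in which the functional test restarts kube-apiserver (the in-cluster endpoint 10.96.0.1:443 is briefly unreachable), and the `ready` plugin withholds readiness until the kubernetes plugin syncs ("Still waiting on: \"kubernetes\""). A minimal standalone Go sketch of polling such a readiness endpoint — the pod-local address :8181 is the ready plugin's default and is an assumption here, not taken from this report:
	
	package main
	
	import (
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		// Assumed pod-local address of CoreDNS's `ready` plugin (default :8181).
		url := "http://127.0.0.1:8181/ready"
		for i := 0; i < 30; i++ {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("coredns reports ready")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for readiness")
	}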
	
	
	==> describe nodes <==
	Name:               functional-823635
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-823635
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=functional-823635
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T14_26_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 14:26:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-823635
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 14:38:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 14:38:11 +0000   Sat, 18 Oct 2025 14:26:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 14:38:11 +0000   Sat, 18 Oct 2025 14:26:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 14:38:11 +0000   Sat, 18 Oct 2025 14:26:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 14:38:11 +0000   Sat, 18 Oct 2025 14:26:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-823635
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                c89c3cea-f79f-4b3e-bfa3-34b778dae193
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-kvjxl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  default                     hello-node-connect-7d85dfc575-w6k84           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-8kx2d                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-zdmkg                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-823635                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-stt2s                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-823635              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-823635     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-b9mv2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-823635              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-xpl9m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-c7xfp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-823635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-823635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-823635 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-823635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-823635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-823635 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-823635 event: Registered Node functional-823635 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-823635 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-823635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-823635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-823635 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-823635 event: Registered Node functional-823635 in Controller
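	
	Note: the percentages in the "Allocated resources" table above are integer ratios of the summed pod requests/limits against node allocatable (8 CPUs = 8000m). A minimal sketch reproducing the cpu-requests figure from the pod table:
	
	package main
	
	import "fmt"
	
	func main() {
		// CPU requests (millicores) summed from the pod table above:
		// mysql 600m, coredns 100m, etcd 100m, kindnet 100m,
		// kube-apiserver 250m, kube-controller-manager 200m, kube-scheduler 100m.
		requests := 600 + 100 + 100 + 100 + 250 + 200 + 100
		allocatable := 8 * 1000 // 8 CPUs in millicores
		fmt.Printf("cpu %dm (%d%%)\n", requests, requests*100/allocatable)
		// Output: cpu 1450m (18%) — matching the Allocated resources line.
	}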
	
	
	==> dmesg <==
	[  +0.096767] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026410] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.055938] kauditd_printk_skb: 47 callbacks suppressed
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	
	
	==> etcd [4e3e13eed943414a9ff5b1ecd1312e5c7eb4abbb35998a5258ffe489435019e7] <==
	{"level":"warn","ts":"2025-10-18T14:26:28.755035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:26:28.761237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:26:28.767616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:26:28.789314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:26:28.795391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:26:28.801751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:26:28.847994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53618","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T14:27:25.470976Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T14:27:25.471126Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-823635","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-18T14:27:25.471288Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T14:27:25.472865Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T14:27:25.472938Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:27:25.472958Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-18T14:27:25.473015Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T14:27:25.473030Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T14:27:25.473039Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:27:25.473053Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-18T14:27:25.473051Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-18T14:27:25.473080Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T14:27:25.473131Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T14:27:25.473146Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:27:25.475024Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-18T14:27:25.475099Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:27:25.475121Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-18T14:27:25.475126Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-823635","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [510ec089b935bcedf7d7d5aaaeac3889348d081ccb8cb04c9f0ac6b07b07ade4] <==
	{"level":"warn","ts":"2025-10-18T14:27:28.698601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.704720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.711243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.718181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.725019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.731605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.740539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.749051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.754942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.760868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.767374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.779310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.785480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.791363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.798293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.804427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.810389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.816761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.827965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.833772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.839574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.890018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47754","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T14:37:28.429191Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1013}
	{"level":"info","ts":"2025-10-18T14:37:28.448766Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1013,"took":"19.143375ms","hash":179156050,"current-db-size-bytes":3440640,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1540096,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-18T14:37:28.448812Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":179156050,"revision":1013,"compact-revision":-1}
	
	
	==> kernel <==
	 14:38:21 up  2:20,  0 user,  load average: 0.30, 0.26, 0.98
	Linux functional-823635 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b42a28b08255a3bef55d5fa86d732fafa60fa297ac485543f4dade8ec44bc21d] <==
	I1018 14:36:15.402371       1 main.go:301] handling current node
	I1018 14:36:25.399355       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:36:25.399395       1 main.go:301] handling current node
	I1018 14:36:35.402164       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:36:35.402197       1 main.go:301] handling current node
	I1018 14:36:45.397508       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:36:45.397550       1 main.go:301] handling current node
	I1018 14:36:55.398529       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:36:55.398563       1 main.go:301] handling current node
	I1018 14:37:05.401713       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:37:05.401749       1 main.go:301] handling current node
	I1018 14:37:15.401062       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:37:15.401098       1 main.go:301] handling current node
	I1018 14:37:25.399981       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:37:25.400016       1 main.go:301] handling current node
	I1018 14:37:35.402116       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:37:35.402153       1 main.go:301] handling current node
	I1018 14:37:45.401147       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:37:45.401201       1 main.go:301] handling current node
	I1018 14:37:55.399999       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:37:55.400056       1 main.go:301] handling current node
	I1018 14:38:05.400669       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:38:05.400705       1 main.go:301] handling current node
	I1018 14:38:15.400974       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:38:15.401007       1 main.go:301] handling current node
	
	
	==> kindnet [c98f6eb04b82aa0b7cc5310c19c7a42d5e988cb5dec7981b768a563ad8848a4a] <==
	I1018 14:26:38.183058       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 14:26:38.183348       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1018 14:26:38.183496       1 main.go:148] setting mtu 1500 for CNI 
	I1018 14:26:38.183512       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 14:26:38.183530       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T14:26:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 14:26:38.383580       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 14:26:38.384446       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 14:26:38.384488       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 14:26:38.384694       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 14:26:38.685469       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 14:26:38.685495       1 metrics.go:72] Registering metrics
	I1018 14:26:38.685541       1 controller.go:711] "Syncing nftables rules"
	I1018 14:26:48.387426       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:26:48.387493       1 main.go:301] handling current node
	I1018 14:26:58.391538       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:26:58.391584       1 main.go:301] handling current node
	I1018 14:27:08.388039       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:27:08.388070       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f4f02a115fe053e62f50a9933a8129933b890f1ad2341770ee4e1d3c244922a5] <==
	I1018 14:27:29.347489       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 14:27:29.365958       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 14:27:30.074021       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 14:27:30.235431       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1018 14:27:30.441146       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1018 14:27:30.442363       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 14:27:30.446356       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 14:27:30.830119       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 14:27:30.922433       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 14:27:30.969092       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 14:27:30.974118       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 14:27:35.090624       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 14:27:49.633545       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.137.242"}
	I1018 14:27:53.926329       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.47.131"}
	I1018 14:27:56.292249       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.212.182"}
	E1018 14:28:07.101429       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37014: use of closed network connection
	E1018 14:28:08.077420       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37036: use of closed network connection
	E1018 14:28:09.467461       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37056: use of closed network connection
	E1018 14:28:10.923014       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37066: use of closed network connection
	I1018 14:28:11.184020       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.236.205"}
	I1018 14:29:12.558896       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.86.84"}
	I1018 14:33:20.830094       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 14:33:20.936844       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.218.253"}
	I1018 14:33:20.949394       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.153.159"}
	I1018 14:37:29.249837       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [480b306de65e2b064686a5b46763f65b2d0c2ca241a51b9c215b6d051ec2a38d] <==
	I1018 14:27:32.679137       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 14:27:32.679066       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 14:27:32.679197       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 14:27:32.679077       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 14:27:32.680438       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 14:27:32.684675       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 14:27:32.684675       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 14:27:32.697837       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 14:27:32.697890       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 14:27:32.697924       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 14:27:32.697929       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 14:27:32.697933       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 14:27:32.699198       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:27:32.700246       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 14:27:32.700352       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 14:27:32.700442       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-823635"
	I1018 14:27:32.700494       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 14:27:32.702592       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 14:27:32.704881       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	E1018 14:33:20.879466       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:33:20.883478       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:33:20.886986       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:33:20.887939       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:33:20.889983       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:33:20.895450       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [c89c1234ce311e4076f08ce6733d8b7750437cd52cbbc34f8bb6350e20808e1b] <==
	I1018 14:27:15.230960       1 serving.go:386] Generated self-signed cert in-memory
	I1018 14:27:15.458255       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 14:27:15.458276       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:27:15.459506       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 14:27:15.459523       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1018 14:27:15.459855       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 14:27:15.459909       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 14:27:25.462220       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [089827e2b297a614c2c88054a3fe7135ff5aeb3e0210cb68fc9558b8469187ec] <==
	I1018 14:26:38.030805       1 server_linux.go:53] "Using iptables proxy"
	I1018 14:26:38.099565       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 14:26:38.200564       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:26:38.200602       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 14:26:38.200680       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:26:38.219018       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 14:26:38.219062       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:26:38.224155       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:26:38.224526       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:26:38.224562       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:26:38.225873       1 config.go:200] "Starting service config controller"
	I1018 14:26:38.225886       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:26:38.225902       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:26:38.225908       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:26:38.225988       1 config.go:309] "Starting node config controller"
	I1018 14:26:38.225995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:26:38.225985       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:26:38.226005       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:26:38.226001       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:26:38.326096       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:26:38.326097       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 14:26:38.327398       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [0caa23e95037ba4b680939efa51d43b1deef3bdd2d7fe3bc6b60e2776dd86054] <==
	I1018 14:27:15.067811       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1018 14:27:15.068838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-823635&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:27:16.326596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-823635&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:27:19.026736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-823635&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:27:24.752138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-823635&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1018 14:27:32.168784       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:27:32.168848       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 14:27:32.169004       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:27:32.189101       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 14:27:32.189152       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:27:32.194889       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:27:32.195323       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:27:32.195370       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:27:32.196813       1 config.go:309] "Starting node config controller"
	I1018 14:27:32.196835       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:27:32.196844       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:27:32.196859       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:27:32.196864       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:27:32.196893       1 config.go:200] "Starting service config controller"
	I1018 14:27:32.196900       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:27:32.196942       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:27:32.196949       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:27:32.297496       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 14:27:32.297525       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:27:32.297519       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
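	
	Note: the "Failed to watch ... connection refused" lines above come from client-go's reflector retrying its LIST against the restarting apiserver; the retry intervals roughly double (14:27:15 → :16 → :19 → :24) until the caches sync at 14:27:32. A minimal standalone sketch of that retry-with-backoff pattern — not client-go's actual implementation, and `listNodes` is a hypothetical stub:
	
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// listNodes stands in for the reflector's LIST call; it fails a few
	// times to mimic the connection-refused errors above.
	var attempts int
	
	func listNodes() error {
		attempts++
		if attempts < 5 {
			return errors.New("dial tcp 192.168.49.2:8441: connect: connection refused")
		}
		return nil
	}
	
	func main() {
		backoff := time.Second
		for {
			if err := listNodes(); err != nil {
				fmt.Println("Failed to watch:", err)
				time.Sleep(backoff)
				backoff *= 2 // intervals roughly double, as in the timestamps above
				continue
			}
			fmt.Println("Caches are synced")
			return
		}
	}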
	
	
	==> kube-scheduler [50c2de213fe05086fd0f202bc87d4794e5cf06d1e90bc2b581c33039db5afeb7] <==
	E1018 14:26:29.246389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:26:29.246436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:26:29.246477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 14:26:29.246538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 14:26:29.246590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 14:26:30.050359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:26:30.096722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 14:26:30.134148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 14:26:30.273298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 14:26:30.343340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 14:26:30.365835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 14:26:30.368882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:26:30.386971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:26:30.392044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:26:30.419056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 14:26:30.430286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:26:30.484589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:26:30.501822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1018 14:26:30.844191       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:27:14.845369       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:27:14.845432       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 14:27:14.845513       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 14:27:14.845540       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 14:27:14.845547       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 14:27:14.845571       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [65a173c29bb11fa7b34e191eaa9aeef81e739e6edee0571260868bfa4f411b94] <==
	E1018 14:27:20.428735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 14:27:20.442371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:27:20.466841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 14:27:20.492518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:27:20.968943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:27:23.394557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 14:27:23.960214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 14:27:24.072090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 14:27:24.135977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:27:24.269897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:27:24.309794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 14:27:24.354393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:27:24.434450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 14:27:24.800416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 14:27:24.947637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 14:27:25.005447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:27:25.043088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:27:25.138956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 14:27:25.523395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:27:25.798266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 14:27:26.001045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 14:27:26.051712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:27:26.794417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 14:27:26.836204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1018 14:27:35.488262       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 14:36:55 functional-823635 kubelet[4121]: E1018 14:36:55.358251    4121 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 18 14:36:55 functional-823635 kubelet[4121]: E1018 14:36:55.358474    4121 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-w6k84_default(afc7f7cc-801e-451a-a546-408cff4e3833): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Oct 18 14:36:55 functional-823635 kubelet[4121]: E1018 14:36:55.358529    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w6k84" podUID="afc7f7cc-801e-451a-a546-408cff4e3833"
	Oct 18 14:36:55 functional-823635 kubelet[4121]: E1018 14:36:55.358906    4121 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 18 14:36:55 functional-823635 kubelet[4121]: E1018 14:36:55.358972    4121 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 18 14:36:55 functional-823635 kubelet[4121]: E1018 14:36:55.359167    4121 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-kvjxl_default(c0e2f316-9809-4f04-8b57-14e8eb1f0204): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Oct 18 14:36:55 functional-823635 kubelet[4121]: E1018 14:36:55.360501    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-kvjxl" podUID="c0e2f316-9809-4f04-8b57-14e8eb1f0204"
	Oct 18 14:36:55 functional-823635 kubelet[4121]: E1018 14:36:55.526476    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c7xfp" podUID="95c9f8bb-f768-4d5e-ba3b-dccc22757ed0"
	Oct 18 14:37:06 functional-823635 kubelet[4121]: E1018 14:37:06.996693    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w6k84" podUID="afc7f7cc-801e-451a-a546-408cff4e3833"
	Oct 18 14:37:09 functional-823635 kubelet[4121]: E1018 14:37:09.996416    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-kvjxl" podUID="c0e2f316-9809-4f04-8b57-14e8eb1f0204"
	Oct 18 14:37:19 functional-823635 kubelet[4121]: E1018 14:37:19.996018    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w6k84" podUID="afc7f7cc-801e-451a-a546-408cff4e3833"
	Oct 18 14:37:21 functional-823635 kubelet[4121]: E1018 14:37:21.996002    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-kvjxl" podUID="c0e2f316-9809-4f04-8b57-14e8eb1f0204"
	Oct 18 14:37:33 functional-823635 kubelet[4121]: E1018 14:37:33.996232    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-kvjxl" podUID="c0e2f316-9809-4f04-8b57-14e8eb1f0204"
	Oct 18 14:37:34 functional-823635 kubelet[4121]: E1018 14:37:34.996637    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w6k84" podUID="afc7f7cc-801e-451a-a546-408cff4e3833"
	Oct 18 14:37:46 functional-823635 kubelet[4121]: E1018 14:37:46.996881    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w6k84" podUID="afc7f7cc-801e-451a-a546-408cff4e3833"
	Oct 18 14:37:46 functional-823635 kubelet[4121]: E1018 14:37:46.997082    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-kvjxl" podUID="c0e2f316-9809-4f04-8b57-14e8eb1f0204"
	Oct 18 14:37:58 functional-823635 kubelet[4121]: E1018 14:37:58.023599    4121 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 18 14:37:58 functional-823635 kubelet[4121]: E1018 14:37:58.023670    4121 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 18 14:37:58 functional-823635 kubelet[4121]: E1018 14:37:58.024056    4121 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx-svc_default(ee39520e-b6d9-4fe9-824c-db1d0b2e661e): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 14:37:58 functional-823635 kubelet[4121]: E1018 14:37:58.024134    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ee39520e-b6d9-4fe9-824c-db1d0b2e661e"
	Oct 18 14:37:58 functional-823635 kubelet[4121]: E1018 14:37:58.997885    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w6k84" podUID="afc7f7cc-801e-451a-a546-408cff4e3833"
	Oct 18 14:37:58 functional-823635 kubelet[4121]: E1018 14:37:58.997988    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-kvjxl" podUID="c0e2f316-9809-4f04-8b57-14e8eb1f0204"
	Oct 18 14:38:09 functional-823635 kubelet[4121]: E1018 14:38:09.996962    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ee39520e-b6d9-4fe9-824c-db1d0b2e661e"
	Oct 18 14:38:10 functional-823635 kubelet[4121]: E1018 14:38:10.995702    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-kvjxl" podUID="c0e2f316-9809-4f04-8b57-14e8eb1f0204"
	Oct 18 14:38:12 functional-823635 kubelet[4121]: E1018 14:38:12.995866    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w6k84" podUID="afc7f7cc-801e-451a-a546-408cff4e3833"
	
	
	==> storage-provisioner [0973b81bb46303994a6ee55425593dc6f31b75283660128cdf4b2aca621b2db0] <==
	W1018 14:37:56.795245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:58.799232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:58.802985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:00.805766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:00.811220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:02.814527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:02.819952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:04.823090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:04.827179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:06.830485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:06.836066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:08.839280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:08.844019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:10.847043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:10.850840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:12.853868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:12.857971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:14.861699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:14.865619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:16.868678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:16.873337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:18.876735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:18.880668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:20.883101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:20.887949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [79e2f9ff59d010e391ea9ba1565688857cee3f6061e58fe816e0fd7bc5464d4c] <==
	I1018 14:26:49.443004       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-823635_bdc6698e-30cc-41ba-8640-be0d38b72921!
	W1018 14:26:51.350611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:51.356452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:53.359516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:53.363277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:55.366759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:55.371592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:57.374629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:57.378317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:59.381478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:59.385461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:01.388382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:01.392241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:03.395864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:03.400171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:05.403696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:05.407592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:07.410344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:07.415319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:09.418313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:09.422093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:11.425619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:11.431210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:13.434190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:13.437932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
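Two failure modes dominate the log dump above. The reflector errors ("connection refused" on 192.168.49.2:8441) are transient: they are logged while the apiserver is restarting, and the subsequent "Caches are synced" line shows recovery. The kubelet errors are the real blockers: CRI-O resolves unqualified image names through containers-registries.conf(5), and with short-name-mode set to enforcing a bare reference such as kicbase/echo-server:latest is rejected when its resolution is ambiguous; the docker.io pulls fail separately on Docker Hub's unauthenticated rate limit (toomanyrequests). A minimal sketch of the registries.conf knobs involved — the values and the alias below are illustrative assumptions, not configuration read from this node:

	# containers-registries.conf(5) -- illustrative values, not the node's actual file
	unqualified-search-registries = ["docker.io"]
	short-name-mode = "permissive"   # this node evidently runs with "enforcing"

	[aliases]
	  # an explicit alias also makes a short name unambiguous under "enforcing"
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"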
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-823635 -n functional-823635
helpers_test.go:269: (dbg) Run:  kubectl --context functional-823635 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-kvjxl hello-node-connect-7d85dfc575-w6k84 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xpl9m kubernetes-dashboard-855c9754f9-c7xfp
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-823635 describe pod busybox-mount hello-node-75c85bcc94-kvjxl hello-node-connect-7d85dfc575-w6k84 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xpl9m kubernetes-dashboard-855c9754f9-c7xfp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-823635 describe pod busybox-mount hello-node-75c85bcc94-kvjxl hello-node-connect-7d85dfc575-w6k84 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xpl9m kubernetes-dashboard-855c9754f9-c7xfp: exit status 1 (95.22787ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-823635/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:27:57 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  cri-o://dd337385fed92e05f2a197d6d7595005b37fb5bf9065adcfef39a71b932525fe
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 18 Oct 2025 14:29:05 +0000
	      Finished:     Sat, 18 Oct 2025 14:29:05 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p6k4w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-p6k4w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  10m    default-scheduler  Successfully assigned default/busybox-mount to functional-823635
	  Normal  Pulling    10m    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m17s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.888s (1m4.677s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m17s  kubelet            Created container: mount-munger
	  Normal  Started    9m17s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-kvjxl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-823635/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:29:12 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n5b64 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n5b64:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m10s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-kvjxl to functional-823635
	  Normal   Pulling    3m50s (x4 over 9m10s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     87s (x4 over 8m12s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     87s (x4 over 8m12s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    12s (x11 over 8m11s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     12s (x11 over 8m11s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-w6k84
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-823635/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:28:11 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6szb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s6szb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w6k84 to functional-823635
	  Normal   Pulling    3m53s (x4 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     87s (x4 over 8m12s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     87s (x4 over 8m12s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    10s (x10 over 8m11s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     10s (x10 over 8m11s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-823635/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:27:56 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b946b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-b946b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/nginx-svc to functional-823635
	  Warning  Failed     4m34s                kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m46s (x4 over 10m)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     24s (x3 over 9m19s)  kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     24s (x4 over 9m19s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    13s (x6 over 9m18s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     13s (x6 over 9m18s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-823635/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:27:59 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fxjlk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-fxjlk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/sp-pod to functional-823635
	  Warning  Failed     8m12s                  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m52s                  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m31s (x3 over 8m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed     3m31s                  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m58s (x5 over 8m11s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m58s (x5 over 8m11s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m43s (x4 over 10m)    kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-xpl9m" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-c7xfp" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-823635 describe pod busybox-mount hello-node-75c85bcc94-kvjxl hello-node-connect-7d85dfc575-w6k84 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xpl9m kubernetes-dashboard-855c9754f9-c7xfp: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.20s)
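The DashboardCmd failure above is ultimately a registry problem, not a dashboard problem: the kubernetesui/dashboard and nginx pulls died on Docker Hub's toomanyrequests rate limit. A standard mitigation is to pull as an authenticated user via an imagePullSecret; the sketch below uses a hypothetical secret name (regcred) and placeholder credentials:

	kubectl --context functional-823635 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>
	kubectl --context functional-823635 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'

Pods using the default service account in the default namespace then pull with those credentials, which lifts the anonymous rate limit.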

x
+
TestFunctional/parallel/ServiceCmdConnect (602.86s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-823635 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-823635 expose deployment hello-node-connect --type=NodePort --port=8080
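Note that the deployment is created with an unqualified image reference, which is exactly what trips CRI-O's short-name enforcement a few lines down. A fully qualified reference avoids the ambiguity entirely; a sketch (the explicit registry prefix is the point, the tag is assumed):

	kubectl --context functional-823635 create deployment hello-node-connect \
	  --image docker.io/kicbase/echo-server:latest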
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-w6k84" [afc7f7cc-801e-451a-a546-408cff4e3833] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1018 14:28:18.850383   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:28:39.332047   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-823635 -n functional-823635
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-18 14:38:11.503350511 +0000 UTC m=+1392.010353468
functional_test.go:1645: (dbg) Run:  kubectl --context functional-823635 describe po hello-node-connect-7d85dfc575-w6k84 -n default
functional_test.go:1645: (dbg) kubectl --context functional-823635 describe po hello-node-connect-7d85dfc575-w6k84 -n default:
Name:             hello-node-connect-7d85dfc575-w6k84
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-823635/192.168.49.2
Start Time:       Sat, 18 Oct 2025 14:28:11 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6szb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-s6szb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w6k84 to functional-823635
Normal   Pulling    3m42s (x4 over 10m)  kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     76s (x4 over 8m1s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     76s (x4 over 8m1s)   kubelet            Error: ErrImagePull
Normal   BackOff    13s (x9 over 8m)     kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     13s (x9 over 8m)     kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-823635 logs hello-node-connect-7d85dfc575-w6k84 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-823635 logs hello-node-connect-7d85dfc575-w6k84 -n default: exit status 1 (68.771014ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-w6k84" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-823635 logs hello-node-connect-7d85dfc575-w6k84 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
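The logs calls fail because the echo-server container never started: a pod stuck in ImagePullBackOff has no container to produce logs, so the describe output and events above are the only available evidence. The same events can be queried directly; a sketch:

	kubectl --context functional-823635 get events -n default \
	  --field-selector involvedObject.name=hello-node-connect-7d85dfc575-w6k84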
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-823635 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-w6k84
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-823635/192.168.49.2
Start Time:       Sat, 18 Oct 2025 14:28:11 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6szb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-s6szb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w6k84 to functional-823635
Normal   Pulling    3m42s (x4 over 10m)  kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     76s (x4 over 8m1s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     76s (x4 over 8m1s)   kubelet            Error: ErrImagePull
Normal   BackOff    13s (x9 over 8m)     kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     13s (x9 over 8m)     kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-823635 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-823635 logs -l app=hello-node-connect: exit status 1 (63.839596ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-w6k84" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-823635 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-823635 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.236.205
IPs:                      10.99.236.205
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31488/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
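The service describe confirms the symptom end to end: Endpoints is empty because the lone backing pod never became Ready, so NodePort 31488 has nothing to route to. The same check against the EndpointSlice API (the replacement that the storage-provisioner deprecation warnings earlier point at) looks like this; a sketch:

	kubectl --context functional-823635 get endpointslices -n default \
	  -l kubernetes.io/service-name=hello-node-connect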
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-823635
helpers_test.go:243: (dbg) docker inspect functional-823635:

-- stdout --
	[
	    {
	        "Id": "0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e",
	        "Created": "2025-10-18T14:26:18.436737288Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 120898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T14:26:18.470822803Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e/hostname",
	        "HostsPath": "/var/lib/docker/containers/0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e/hosts",
	        "LogPath": "/var/lib/docker/containers/0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e/0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e-json.log",
	        "Name": "/functional-823635",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-823635:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-823635",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e",
	                "LowerDir": "/var/lib/docker/overlay2/5082ed7a7d4f3cdf2f9271d923fd3b0d056c6762d4c76a1ba4517906c5b1b1bf-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5082ed7a7d4f3cdf2f9271d923fd3b0d056c6762d4c76a1ba4517906c5b1b1bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5082ed7a7d4f3cdf2f9271d923fd3b0d056c6762d4c76a1ba4517906c5b1b1bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5082ed7a7d4f3cdf2f9271d923fd3b0d056c6762d4c76a1ba4517906c5b1b1bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-823635",
	                "Source": "/var/lib/docker/volumes/functional-823635/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-823635",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-823635",
	                "name.minikube.sigs.k8s.io": "functional-823635",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fd03071fcaf4a0d71521bffa1eb6767116fc7bde333deaa49c9042ef66155301",
	            "SandboxKey": "/var/run/docker/netns/fd03071fcaf4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-823635": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:17:41:74:60:89",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ac30e007c0f5497eef211625aeba4ddabc991ddfbfb64985fe205fdaca6d7800",
	                    "EndpointID": "70ea1dea80c293bd2b5dcbe0155d3887dc026762d0c6aed349ff54f357d2d760",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-823635",
	                        "0cd7caf20b47"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
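
The inspect dump above is where the service tests get their host endpoints: each exposed node port (22, 2376, 5000, 8441, 32443) is bound to an ephemeral port on 127.0.0.1, and 8441/tcp, the apiserver port, maps to 127.0.0.1:32781 here. Below is a minimal Go sketch of re-querying that mapping; it is illustrative rather than part of the test suite, the profile name is taken from this report, and it assumes the container is still running.

	// portmap.go: an illustrative sketch (hypothetical file, not in the repo).
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// portBinding mirrors the HostIp/HostPort objects in the inspect output above.
	type portBinding struct {
		HostIp   string `json:"HostIp"`
		HostPort string `json:"HostPort"`
	}

	func main() {
		// Same data as the NetworkSettings.Ports block in the dump above.
		out, err := exec.Command("docker", "inspect",
			"--format", "{{json .NetworkSettings.Ports}}", "functional-823635").Output()
		if err != nil {
			log.Fatalf("docker inspect failed: %v", err)
		}
		var ports map[string][]portBinding
		if err := json.Unmarshal(out, &ports); err != nil {
			log.Fatalf("decoding port map: %v", err)
		}
		for _, b := range ports["8441/tcp"] { // apiserver port from the dump
			fmt.Printf("apiserver forwarded to %s:%s\n", b.HostIp, b.HostPort)
		}
	}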
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-823635 -n functional-823635
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-823635 logs -n 25: (1.25669612s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-823635 ssh sudo systemctl is-active containerd                                                                                                       │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │                     │
	│ image          │ functional-823635 image load --daemon kicbase/echo-server:functional-823635 --alsologtostderr                                                                   │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image ls                                                                                                                                      │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image load --daemon kicbase/echo-server:functional-823635 --alsologtostderr                                                                   │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image ls                                                                                                                                      │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image load --daemon kicbase/echo-server:functional-823635 --alsologtostderr                                                                   │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image ls                                                                                                                                      │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image save kicbase/echo-server:functional-823635 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image rm kicbase/echo-server:functional-823635 --alsologtostderr                                                                              │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image ls                                                                                                                                      │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image          │ functional-823635 image save --daemon kicbase/echo-server:functional-823635 --alsologtostderr                                                                   │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ start          │ -p functional-823635 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │                     │
	│ start          │ -p functional-823635 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                 │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-823635 --alsologtostderr -v=1                                                                                                  │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │                     │
	│ update-context │ functional-823635 update-context --alsologtostderr -v=2                                                                                                         │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ update-context │ functional-823635 update-context --alsologtostderr -v=2                                                                                                         │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ update-context │ functional-823635 update-context --alsologtostderr -v=2                                                                                                         │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-823635 image ls --format short --alsologtostderr                                                                                                     │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-823635 image ls --format json --alsologtostderr                                                                                                      │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-823635 image ls --format table --alsologtostderr                                                                                                     │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-823635 image ls --format yaml --alsologtostderr                                                                                                      │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ ssh            │ functional-823635 ssh pgrep buildkitd                                                                                                                           │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │                     │
	│ image          │ functional-823635 image build -t localhost/my-image:functional-823635 testdata/build --alsologtostderr                                                          │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	│ image          │ functional-823635 image ls                                                                                                                                      │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:34 UTC │ 18 Oct 25 14:34 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 14:33:19
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 14:33:19.837101  137575 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:33:19.837214  137575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:33:19.837219  137575 out.go:374] Setting ErrFile to fd 2...
	I1018 14:33:19.837224  137575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:33:19.837429  137575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:33:19.837864  137575 out.go:368] Setting JSON to false
	I1018 14:33:19.838774  137575 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":8151,"bootTime":1760789849,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:33:19.838873  137575 start.go:141] virtualization: kvm guest
	I1018 14:33:19.840818  137575 out.go:179] * [functional-823635] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 14:33:19.842355  137575 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 14:33:19.842339  137575 notify.go:220] Checking for updates...
	I1018 14:33:19.843868  137575 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:33:19.845097  137575 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 14:33:19.846215  137575 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 14:33:19.847364  137575 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 14:33:19.848501  137575 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 14:33:19.850035  137575 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:33:19.850568  137575 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:33:19.873951  137575 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 14:33:19.874066  137575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:33:19.933740  137575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-18 14:33:19.923050776 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:33:19.933880  137575 docker.go:318] overlay module found
	I1018 14:33:19.935576  137575 out.go:179] * Using the docker driver based on existing profile
	I1018 14:33:19.936824  137575 start.go:305] selected driver: docker
	I1018 14:33:19.936841  137575 start.go:925] validating driver "docker" against &{Name:functional-823635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-823635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:33:19.936968  137575 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 14:33:19.937071  137575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:33:19.996183  137575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-18 14:33:19.986250339 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:33:19.996789  137575 cni.go:84] Creating CNI manager for ""
	I1018 14:33:19.996857  137575 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 14:33:19.996897  137575 start.go:349] cluster config:
	{Name:functional-823635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-823635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:33:19.999720  137575 out.go:179] * dry-run validation complete!
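
	The "Last Start" section above is the --dry-run invocation recorded in the Audit table: it validates the saved profile config against the docker driver without touching the running cluster. A hedged sketch of replaying the same invocation from Go follows; the flags are copied verbatim from the Audit entry, and the relative binary path assumes the report's working directory.

	// dryrun.go: an illustrative sketch (hypothetical file, not in the repo).
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Flags match the Audit table entry for the second dry-run start above.
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "functional-823635", "--dry-run", "--alsologtostderr", "-v=1",
			"--driver=docker", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("dry-run failed: %v\n%s", err, out)
		}
		log.Printf("dry-run validation complete:\n%s", out)
	}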
	
	
	==> CRI-O <==
	Oct 18 14:36:08 functional-823635 crio[3554]: time="2025-10-18T14:36:08.780946383Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=441f3caf-7c91-4eb0-8280-207debdf2851 name=/runtime.v1.ImageService/PullImage
	Oct 18 14:36:08 functional-823635 crio[3554]: time="2025-10-18T14:36:08.785021364Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 18 14:36:09 functional-823635 crio[3554]: time="2025-10-18T14:36:09.406753044Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=faa1823a-ffe8-49f6-8823-85667f3065be name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:36:09 functional-823635 crio[3554]: time="2025-10-18T14:36:09.407026642Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=faa1823a-ffe8-49f6-8823-85667f3065be name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:36:09 functional-823635 crio[3554]: time="2025-10-18T14:36:09.407090918Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=faa1823a-ffe8-49f6-8823-85667f3065be name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:36:20 functional-823635 crio[3554]: time="2025-10-18T14:36:20.996810411Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=05621c5a-bd6a-4e1d-8121-6fa9c40779a6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:36:20 functional-823635 crio[3554]: time="2025-10-18T14:36:20.997050973Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=05621c5a-bd6a-4e1d-8121-6fa9c40779a6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:36:20 functional-823635 crio[3554]: time="2025-10-18T14:36:20.997113229Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=05621c5a-bd6a-4e1d-8121-6fa9c40779a6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:36:40 functional-823635 crio[3554]: time="2025-10-18T14:36:40.113621274Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 18 14:36:55 functional-823635 crio[3554]: time="2025-10-18T14:36:55.357678257Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c6f98c45-d0c3-4493-8485-7793a8bb0b13 name=/runtime.v1.ImageService/PullImage
	Oct 18 14:36:55 functional-823635 crio[3554]: time="2025-10-18T14:36:55.35858914Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=04c2393e-30f0-402e-88e3-4852cf44a0c0 name=/runtime.v1.ImageService/PullImage
	Oct 18 14:36:55 functional-823635 crio[3554]: time="2025-10-18T14:36:55.35937317Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=bcf2188e-a529-4e19-b913-69f8c2a5289e name=/runtime.v1.ImageService/PullImage
	Oct 18 14:36:55 functional-823635 crio[3554]: time="2025-10-18T14:36:55.362536303Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 18 14:36:55 functional-823635 crio[3554]: time="2025-10-18T14:36:55.52587035Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=aa31b427-6590-4d9d-9ab9-7428f5a7b27a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:36:55 functional-823635 crio[3554]: time="2025-10-18T14:36:55.526105329Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=aa31b427-6590-4d9d-9ab9-7428f5a7b27a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:36:55 functional-823635 crio[3554]: time="2025-10-18T14:36:55.526163686Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 found" id=aa31b427-6590-4d9d-9ab9-7428f5a7b27a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:37:09 functional-823635 crio[3554]: time="2025-10-18T14:37:09.996569187Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a51175ed-02c9-412f-a456-789d3b7a360c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:37:09 functional-823635 crio[3554]: time="2025-10-18T14:37:09.996786085Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=a51175ed-02c9-412f-a456-789d3b7a360c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:37:09 functional-823635 crio[3554]: time="2025-10-18T14:37:09.996855974Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 found" id=a51175ed-02c9-412f-a456-789d3b7a360c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:37:26 functional-823635 crio[3554]: time="2025-10-18T14:37:26.691518379Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 18 14:37:58 functional-823635 crio[3554]: time="2025-10-18T14:37:58.024169154Z" level=info msg="Pulling image: docker.io/nginx:latest" id=d8b29e89-6361-4b6a-bd64-5b486bfcd06c name=/runtime.v1.ImageService/PullImage
	Oct 18 14:37:58 functional-823635 crio[3554]: time="2025-10-18T14:37:58.027458984Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 18 14:38:09 functional-823635 crio[3554]: time="2025-10-18T14:38:09.996410436Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=3299f41a-45ff-448a-b31a-f9878f764866 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:38:09 functional-823635 crio[3554]: time="2025-10-18T14:38:09.996555181Z" level=info msg="Image docker.io/nginx:alpine not found" id=3299f41a-45ff-448a-b31a-f9878f764866 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:38:09 functional-823635 crio[3554]: time="2025-10-18T14:38:09.996590105Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=3299f41a-45ff-448a-b31a-f9878f764866 name=/runtime.v1.ImageService/ImageStatus
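
	The tail of the CRI-O log shows the dashboard, metrics-scraper, and nginx pulls from docker.io still outstanding, with kubelet's periodic "Checking image status" probes coming back "not found" in between. A sketch for confirming the same state from the host follows; it assumes docker exec access to the kic node and crictl inside the node image (both standard for minikube's kicbase), and is illustrative rather than part of the harness.

	// imagecheck.go: an illustrative sketch (hypothetical file, not in the repo).
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// CRI-O logged "Image docker.io/nginx:alpine not found"; listing the same
		// reference through crictl inside the node should come back empty.
		out, err := exec.Command("docker", "exec", "functional-823635",
			"crictl", "images", "-q", "docker.io/nginx:alpine").CombinedOutput()
		if err != nil {
			log.Fatalf("crictl images failed: %v\n%s", err, out)
		}
		if len(out) == 0 {
			fmt.Println("image absent from CRI-O storage (matches the log)")
		} else {
			fmt.Printf("image IDs:\n%s", out)
		}
	}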
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	dd337385fed92       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 minutes ago       Exited              mount-munger              0                   28fbf2e5ed90a       busybox-mount                               default
	93e37f902f52d       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da       10 minutes ago      Running             mysql                     0                   7138c0b9f3baa       mysql-5bb876957f-8kx2d                      default
	f4f02a115fe05       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      10 minutes ago      Running             kube-apiserver            0                   521e60ec1c4f9       kube-apiserver-functional-823635            kube-system
	510ec089b935b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      1                   fb579c4379df7       etcd-functional-823635                      kube-system
	480b306de65e2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      10 minutes ago      Running             kube-controller-manager   2                   3c1e4cb8043c7       kube-controller-manager-functional-823635   kube-system
	65a173c29bb11       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      10 minutes ago      Running             kube-scheduler            1                   bab606ab6f35c       kube-scheduler-functional-823635            kube-system
	c89c1234ce311       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      10 minutes ago      Exited              kube-controller-manager   1                   3c1e4cb8043c7       kube-controller-manager-functional-823635   kube-system
	b42a28b08255a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      10 minutes ago      Running             kindnet-cni               1                   8bd811be0c01b       kindnet-stt2s                               kube-system
	0973b81bb4630       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       1                   717cdc288a802       storage-provisioner                         kube-system
	aa85de7b0ebd5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 minutes ago      Running             coredns                   1                   fbb2b9970544a       coredns-66bc5c9577-zdmkg                    kube-system
	0caa23e95037b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      10 minutes ago      Running             kube-proxy                1                   a42ae80a6ca10       kube-proxy-b9mv2                            kube-system
	84e8835721763       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   0                   fbb2b9970544a       coredns-66bc5c9577-zdmkg                    kube-system
	79e2f9ff59d01       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       0                   717cdc288a802       storage-provisioner                         kube-system
	c98f6eb04b82a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      11 minutes ago      Exited              kindnet-cni               0                   8bd811be0c01b       kindnet-stt2s                               kube-system
	089827e2b297a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      11 minutes ago      Exited              kube-proxy                0                   a42ae80a6ca10       kube-proxy-b9mv2                            kube-system
	50c2de213fe05       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      11 minutes ago      Exited              kube-scheduler            0                   bab606ab6f35c       kube-scheduler-functional-823635            kube-system
	4e3e13eed9434       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Exited              etcd                      0                   fb579c4379df7       etcd-functional-823635                      kube-system
	
	
	==> coredns [84e8835721763a112dee2effc0c878e7ded9cfb104b777493d0895f93b72052a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37458 - 10645 "HINFO IN 7231688986531392643.6176522986195705811. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.141536718s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [aa85de7b0ebd51c3eafa74cf48260bda3fc8d2bb6a5326417290a56f26baf88d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38471 - 27374 "HINFO IN 7208865881712799088.8543358163044453170. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.104517016s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
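
	The reflector failures in this second coredns log ("dial tcp 10.96.0.1:443: connect: connection refused") are what CoreDNS sees while kube-apiserver is down across the profile restart; the restarted instance starts serving with an unsynced API and recovers once the service VIP answers again. A tiny probe for the same condition follows; the VIP is taken from the log lines above, the timeout is arbitrary, and the snippet is only meaningful when run from inside the cluster network (e.g. a debug pod).

	// vipprobe.go: an illustrative sketch (hypothetical file, not in the repo).
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 10.96.0.1:443 is the in-cluster kubernetes.default service VIP from the
		// coredns log above; 2s is an illustrative timeout.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver VIP unreachable:", err) // the state CoreDNS logged
			return
		}
		defer conn.Close()
		fmt.Println("apiserver VIP reachable") // expected once kube-apiserver is back
	}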
	
	
	==> describe nodes <==
	Name:               functional-823635
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-823635
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=functional-823635
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T14_26_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 14:26:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-823635
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 14:38:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 14:38:11 +0000   Sat, 18 Oct 2025 14:26:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 14:38:11 +0000   Sat, 18 Oct 2025 14:26:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 14:38:11 +0000   Sat, 18 Oct 2025 14:26:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 14:38:11 +0000   Sat, 18 Oct 2025 14:26:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-823635
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                c89c3cea-f79f-4b3e-bfa3-34b778dae193
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-kvjxl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m
	  default                     hello-node-connect-7d85dfc575-w6k84           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-8kx2d                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-zdmkg                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-823635                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-stt2s                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-823635              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-823635     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-b9mv2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-823635              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-xpl9m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-c7xfp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-823635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-823635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-823635 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-823635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-823635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-823635 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-823635 event: Registered Node functional-823635 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-823635 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-823635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-823635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-823635 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-823635 event: Registered Node functional-823635 in Controller
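
	For reference, the percentages in "Allocated resources" are the summed pod requests and limits divided by node allocatable; a quick check with the CPU figures copied from the table above (nothing here is new data):

	// allocpct.go: an illustrative sketch (hypothetical file, not in the repo).
	package main

	import "fmt"

	func main() {
		// Figures from the "describe nodes" output above.
		cpuRequestsMilli := 1450.0 // summed CPU requests (1450m)
		cpuLimitsMilli := 800.0    // summed CPU limits (800m)
		cpuAllocMilli := 8000.0    // 8 allocatable CPUs = 8000m

		fmt.Printf("cpu requests: %.0f%%\n", 100*cpuRequestsMilli/cpuAllocMilli) // 18%
		fmt.Printf("cpu limits:   %.0f%%\n", 100*cpuLimitsMilli/cpuAllocMilli)  // 10%
	}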
	
	
	==> dmesg <==
	[  +0.096767] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026410] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.055938] kauditd_printk_skb: 47 callbacks suppressed
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	
	
	==> etcd [4e3e13eed943414a9ff5b1ecd1312e5c7eb4abbb35998a5258ffe489435019e7] <==
	{"level":"warn","ts":"2025-10-18T14:26:28.755035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:26:28.761237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:26:28.767616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:26:28.789314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:26:28.795391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:26:28.801751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:26:28.847994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53618","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T14:27:25.470976Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T14:27:25.471126Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-823635","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-18T14:27:25.471288Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T14:27:25.472865Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T14:27:25.472938Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:27:25.472958Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-18T14:27:25.473015Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T14:27:25.473030Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T14:27:25.473039Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:27:25.473053Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-18T14:27:25.473051Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-18T14:27:25.473080Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T14:27:25.473131Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T14:27:25.473146Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:27:25.475024Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-18T14:27:25.475099Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:27:25.475121Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-18T14:27:25.475126Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-823635","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [510ec089b935bcedf7d7d5aaaeac3889348d081ccb8cb04c9f0ac6b07b07ade4] <==
	{"level":"warn","ts":"2025-10-18T14:27:28.698601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.704720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.711243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.718181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.725019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.731605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.740539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.749051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.754942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.760868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.767374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.779310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.785480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.791363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.798293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.804427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.810389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.816761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.827965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.833772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.839574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.890018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47754","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T14:37:28.429191Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1013}
	{"level":"info","ts":"2025-10-18T14:37:28.448766Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1013,"took":"19.143375ms","hash":179156050,"current-db-size-bytes":3440640,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1540096,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-18T14:37:28.448812Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":179156050,"revision":1013,"compact-revision":-1}
	
	
	==> kernel <==
	 14:38:13 up  2:20,  0 user,  load average: 0.33, 0.26, 0.98
	Linux functional-823635 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b42a28b08255a3bef55d5fa86d732fafa60fa297ac485543f4dade8ec44bc21d] <==
	I1018 14:36:05.401688       1 main.go:301] handling current node
	I1018 14:36:15.402341       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:36:15.402371       1 main.go:301] handling current node
	I1018 14:36:25.399355       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:36:25.399395       1 main.go:301] handling current node
	I1018 14:36:35.402164       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:36:35.402197       1 main.go:301] handling current node
	I1018 14:36:45.397508       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:36:45.397550       1 main.go:301] handling current node
	I1018 14:36:55.398529       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:36:55.398563       1 main.go:301] handling current node
	I1018 14:37:05.401713       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:37:05.401749       1 main.go:301] handling current node
	I1018 14:37:15.401062       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:37:15.401098       1 main.go:301] handling current node
	I1018 14:37:25.399981       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:37:25.400016       1 main.go:301] handling current node
	I1018 14:37:35.402116       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:37:35.402153       1 main.go:301] handling current node
	I1018 14:37:45.401147       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:37:45.401201       1 main.go:301] handling current node
	I1018 14:37:55.399999       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:37:55.400056       1 main.go:301] handling current node
	I1018 14:38:05.400669       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:38:05.400705       1 main.go:301] handling current node
	
	
	==> kindnet [c98f6eb04b82aa0b7cc5310c19c7a42d5e988cb5dec7981b768a563ad8848a4a] <==
	I1018 14:26:38.183058       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 14:26:38.183348       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1018 14:26:38.183496       1 main.go:148] setting mtu 1500 for CNI 
	I1018 14:26:38.183512       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 14:26:38.183530       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T14:26:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 14:26:38.383580       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 14:26:38.384446       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 14:26:38.384488       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 14:26:38.384694       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 14:26:38.685469       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 14:26:38.685495       1 metrics.go:72] Registering metrics
	I1018 14:26:38.685541       1 controller.go:711] "Syncing nftables rules"
	I1018 14:26:48.387426       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:26:48.387493       1 main.go:301] handling current node
	I1018 14:26:58.391538       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:26:58.391584       1 main.go:301] handling current node
	I1018 14:27:08.388039       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:27:08.388070       1 main.go:301] handling current node
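
Two things stand out in this startup: the "nri plugin exited" line means kindnet's optional NRI hook found no socket at /var/run/nri/nri.sock, which is expected when the runtime has NRI disabled (CRI-O ships with it off by default and would need enable_nri turned on in its config to expose the socket); the 10-second "Handling node" loop that follows is the normal single-node reconcile. A sketch for checking the socket on the node:

	test -S /var/run/nri/nri.sock && echo "NRI socket present" || echo "NRI disabled"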
	
	
	==> kube-apiserver [f4f02a115fe053e62f50a9933a8129933b890f1ad2341770ee4e1d3c244922a5] <==
	I1018 14:27:29.347489       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 14:27:29.365958       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 14:27:30.074021       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 14:27:30.235431       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1018 14:27:30.441146       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1018 14:27:30.442363       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 14:27:30.446356       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 14:27:30.830119       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 14:27:30.922433       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 14:27:30.969092       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 14:27:30.974118       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 14:27:35.090624       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 14:27:49.633545       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.137.242"}
	I1018 14:27:53.926329       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.47.131"}
	I1018 14:27:56.292249       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.212.182"}
	E1018 14:28:07.101429       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37014: use of closed network connection
	E1018 14:28:08.077420       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37036: use of closed network connection
	E1018 14:28:09.467461       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37056: use of closed network connection
	E1018 14:28:10.923014       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37066: use of closed network connection
	I1018 14:28:11.184020       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.236.205"}
	I1018 14:29:12.558896       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.86.84"}
	I1018 14:33:20.830094       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 14:33:20.936844       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.218.253"}
	I1018 14:33:20.949394       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.153.159"}
	I1018 14:37:29.249837       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
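
The "allocated clusterIPs" entries track each Service the functional tests create, with addresses handed out from the 10.96.0.0/12 Service CIDR announced at the top of the block. The pattern can be reproduced by hand; a sketch (the deployment name matches the test's hello-node, the port is illustrative):

	kubectl expose deployment hello-node --port=8080
	kubectl get svc hello-node -o jsonpath='{.spec.clusterIP}{"\n"}'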
	
	
	==> kube-controller-manager [480b306de65e2b064686a5b46763f65b2d0c2ca241a51b9c215b6d051ec2a38d] <==
	I1018 14:27:32.679137       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 14:27:32.679066       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 14:27:32.679197       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 14:27:32.679077       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 14:27:32.680438       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 14:27:32.684675       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 14:27:32.684675       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 14:27:32.697837       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 14:27:32.697890       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 14:27:32.697924       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 14:27:32.697929       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 14:27:32.697933       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 14:27:32.699198       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:27:32.700246       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 14:27:32.700352       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 14:27:32.700442       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-823635"
	I1018 14:27:32.700494       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 14:27:32.702592       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 14:27:32.704881       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	E1018 14:33:20.879466       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:33:20.883478       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:33:20.886986       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:33:20.887939       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:33:20.889983       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:33:20.895450       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
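
The repeated "serviceaccount \"kubernetes-dashboard\" not found" errors at 14:33:20 are a create-ordering race: the dashboard ReplicaSets were synced before the namespace's ServiceAccount existed, and the controller retries until it does (the dashboard Services were allocated IPs in the same second, per the apiserver block above). A sketch for confirming it resolved:

	kubectl -n kubernetes-dashboard get serviceaccount kubernetes-dashboard
	kubectl -n kubernetes-dashboard get replicasets,pods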
	
	
	==> kube-controller-manager [c89c1234ce311e4076f08ce6733d8b7750437cd52cbbc34f8bb6350e20808e1b] <==
	I1018 14:27:15.230960       1 serving.go:386] Generated self-signed cert in-memory
	I1018 14:27:15.458255       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 14:27:15.458276       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:27:15.459506       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 14:27:15.459523       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1018 14:27:15.459855       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 14:27:15.459909       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 14:27:25.462220       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
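
This earlier controller-manager instance gave up because the apiserver on port 8441 was still down, the same window the kube-proxy and kube-scheduler blocks below show as "connection refused". The probe it timed out on can be issued directly; a sketch (-k because the endpoint serves minikube's own CA):

	curl -k https://192.168.49.2:8441/healthz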
	
	
	==> kube-proxy [089827e2b297a614c2c88054a3fe7135ff5aeb3e0210cb68fc9558b8469187ec] <==
	I1018 14:26:38.030805       1 server_linux.go:53] "Using iptables proxy"
	I1018 14:26:38.099565       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 14:26:38.200564       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:26:38.200602       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 14:26:38.200680       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:26:38.219018       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 14:26:38.219062       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:26:38.224155       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:26:38.224526       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:26:38.224562       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:26:38.225873       1 config.go:200] "Starting service config controller"
	I1018 14:26:38.225886       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:26:38.225902       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:26:38.225908       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:26:38.225988       1 config.go:309] "Starting node config controller"
	I1018 14:26:38.225995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:26:38.225985       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:26:38.226005       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:26:38.226001       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:26:38.326096       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:26:38.326097       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 14:26:38.327398       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
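
The one error in this block is advisory: with nodePortAddresses unset, NodePort services accept connections on every local IP. The log's own suggestion maps to a one-line addition in the kube-proxy configuration; a sketch (the "primary" value is supported on recent kube-proxy releases, verify for v1.34.1):

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	nodePortAddresses: ["primary"]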
	
	
	==> kube-proxy [0caa23e95037ba4b680939efa51d43b1deef3bdd2d7fe3bc6b60e2776dd86054] <==
	I1018 14:27:15.067811       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1018 14:27:15.068838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-823635&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:27:16.326596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-823635&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:27:19.026736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-823635&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:27:24.752138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-823635&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1018 14:27:32.168784       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:27:32.168848       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 14:27:32.169004       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:27:32.189101       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 14:27:32.189152       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:27:32.194889       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:27:32.195323       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:27:32.195370       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:27:32.196813       1 config.go:309] "Starting node config controller"
	I1018 14:27:32.196835       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:27:32.196844       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:27:32.196859       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:27:32.196864       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:27:32.196893       1 config.go:200] "Starting service config controller"
	I1018 14:27:32.196900       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:27:32.196942       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:27:32.196949       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:27:32.297496       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 14:27:32.297525       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:27:32.297519       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [50c2de213fe05086fd0f202bc87d4794e5cf06d1e90bc2b581c33039db5afeb7] <==
	E1018 14:26:29.246389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:26:29.246436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:26:29.246477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 14:26:29.246538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 14:26:29.246590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 14:26:30.050359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:26:30.096722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 14:26:30.134148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 14:26:30.273298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 14:26:30.343340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 14:26:30.365835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 14:26:30.368882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:26:30.386971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:26:30.392044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:26:30.419056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 14:26:30.430286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:26:30.484589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:26:30.501822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1018 14:26:30.844191       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:27:14.845369       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:27:14.845432       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 14:27:14.845513       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 14:27:14.845540       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 14:27:14.845547       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 14:27:14.845571       1 run.go:72] "command failed" err="finished without leader elect"
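
The 14:26:29-30 "forbidden" storm is the usual bootstrap sequence: the scheduler starts listing resources before the apiserver has finished wiring up the system:kube-scheduler RBAC bindings, and the errors stop once "Caches are synced" appears; the tail of the block is then a clean leader-election shutdown. The permissions can be spot-checked after the fact; a sketch:

	kubectl auth can-i list nodes --as=system:kube-scheduler
	kubectl auth can-i watch csistoragecapacities.storage.k8s.io --as=system:kube-scheduler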
	
	
	==> kube-scheduler [65a173c29bb11fa7b34e191eaa9aeef81e739e6edee0571260868bfa4f411b94] <==
	E1018 14:27:20.428735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 14:27:20.442371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:27:20.466841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 14:27:20.492518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:27:20.968943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:27:23.394557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 14:27:23.960214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 14:27:24.072090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 14:27:24.135977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:27:24.269897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:27:24.309794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 14:27:24.354393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:27:24.434450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 14:27:24.800416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 14:27:24.947637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 14:27:25.005447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:27:25.043088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:27:25.138956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 14:27:25.523395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:27:25.798266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 14:27:26.001045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 14:27:26.051712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:27:26.794417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 14:27:26.836204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1018 14:27:35.488262       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 14:36:55 functional-823635 kubelet[4121]: E1018 14:36:55.358251    4121 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 18 14:36:55 functional-823635 kubelet[4121]: E1018 14:36:55.358474    4121 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-w6k84_default(afc7f7cc-801e-451a-a546-408cff4e3833): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Oct 18 14:36:55 functional-823635 kubelet[4121]: E1018 14:36:55.358529    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w6k84" podUID="afc7f7cc-801e-451a-a546-408cff4e3833"
	Oct 18 14:36:55 functional-823635 kubelet[4121]: E1018 14:36:55.358906    4121 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 18 14:36:55 functional-823635 kubelet[4121]: E1018 14:36:55.358972    4121 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 18 14:36:55 functional-823635 kubelet[4121]: E1018 14:36:55.359167    4121 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-kvjxl_default(c0e2f316-9809-4f04-8b57-14e8eb1f0204): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Oct 18 14:36:55 functional-823635 kubelet[4121]: E1018 14:36:55.360501    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-kvjxl" podUID="c0e2f316-9809-4f04-8b57-14e8eb1f0204"
	Oct 18 14:36:55 functional-823635 kubelet[4121]: E1018 14:36:55.526476    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c7xfp" podUID="95c9f8bb-f768-4d5e-ba3b-dccc22757ed0"
	Oct 18 14:37:06 functional-823635 kubelet[4121]: E1018 14:37:06.996693    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w6k84" podUID="afc7f7cc-801e-451a-a546-408cff4e3833"
	Oct 18 14:37:09 functional-823635 kubelet[4121]: E1018 14:37:09.996416    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-kvjxl" podUID="c0e2f316-9809-4f04-8b57-14e8eb1f0204"
	Oct 18 14:37:19 functional-823635 kubelet[4121]: E1018 14:37:19.996018    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w6k84" podUID="afc7f7cc-801e-451a-a546-408cff4e3833"
	Oct 18 14:37:21 functional-823635 kubelet[4121]: E1018 14:37:21.996002    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-kvjxl" podUID="c0e2f316-9809-4f04-8b57-14e8eb1f0204"
	Oct 18 14:37:33 functional-823635 kubelet[4121]: E1018 14:37:33.996232    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-kvjxl" podUID="c0e2f316-9809-4f04-8b57-14e8eb1f0204"
	Oct 18 14:37:34 functional-823635 kubelet[4121]: E1018 14:37:34.996637    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w6k84" podUID="afc7f7cc-801e-451a-a546-408cff4e3833"
	Oct 18 14:37:46 functional-823635 kubelet[4121]: E1018 14:37:46.996881    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w6k84" podUID="afc7f7cc-801e-451a-a546-408cff4e3833"
	Oct 18 14:37:46 functional-823635 kubelet[4121]: E1018 14:37:46.997082    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-kvjxl" podUID="c0e2f316-9809-4f04-8b57-14e8eb1f0204"
	Oct 18 14:37:58 functional-823635 kubelet[4121]: E1018 14:37:58.023599    4121 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 18 14:37:58 functional-823635 kubelet[4121]: E1018 14:37:58.023670    4121 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 18 14:37:58 functional-823635 kubelet[4121]: E1018 14:37:58.024056    4121 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx-svc_default(ee39520e-b6d9-4fe9-824c-db1d0b2e661e): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 14:37:58 functional-823635 kubelet[4121]: E1018 14:37:58.024134    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ee39520e-b6d9-4fe9-824c-db1d0b2e661e"
	Oct 18 14:37:58 functional-823635 kubelet[4121]: E1018 14:37:58.997885    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w6k84" podUID="afc7f7cc-801e-451a-a546-408cff4e3833"
	Oct 18 14:37:58 functional-823635 kubelet[4121]: E1018 14:37:58.997988    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-kvjxl" podUID="c0e2f316-9809-4f04-8b57-14e8eb1f0204"
	Oct 18 14:38:09 functional-823635 kubelet[4121]: E1018 14:38:09.996962    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ee39520e-b6d9-4fe9-824c-db1d0b2e661e"
	Oct 18 14:38:10 functional-823635 kubelet[4121]: E1018 14:38:10.995702    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-kvjxl" podUID="c0e2f316-9809-4f04-8b57-14e8eb1f0204"
	Oct 18 14:38:12 functional-823635 kubelet[4121]: E1018 14:38:12.995866    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w6k84" podUID="afc7f7cc-801e-451a-a546-408cff4e3833"
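
Two distinct pull failures dominate this block. "short name mode is enforcing ... returns ambiguous list" means CRI-O refused to guess a registry for the unqualified name kicbase/echo-server; the toomanyrequests failures for nginx:alpine and the dashboard image are Docker Hub's unauthenticated pull rate limit. The first is fixable on the node with a short-name alias; a sketch of containers-registries.conf(5) syntax (the drop-in path is illustrative):

	# /etc/containers/registries.conf.d/10-echo-server.conf
	[aliases]
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"

Alternatively, set short-name-mode = "permissive" in /etc/containers/registries.conf or use the fully qualified image name; the rate-limit failures need either authenticated pulls or a registry mirror.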
	
	
	==> storage-provisioner [0973b81bb46303994a6ee55425593dc6f31b75283660128cdf4b2aca621b2db0] <==
	W1018 14:37:48.763364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:50.766841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:50.770821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:52.774128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:52.778325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:54.781845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:54.787495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:56.791153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:56.795245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:58.799232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:37:58.802985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:00.805766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:00.811220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:02.814527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:02.819952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:04.823090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:04.827179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:06.830485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:06.836066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:08.839280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:08.844019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:10.847043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:10.850840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:12.853868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:38:12.857971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [79e2f9ff59d010e391ea9ba1565688857cee3f6061e58fe816e0fd7bc5464d4c] <==
	I1018 14:26:49.443004       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-823635_bdc6698e-30cc-41ba-8640-be0d38b72921!
	W1018 14:26:51.350611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:51.356452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:53.359516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:53.363277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:55.366759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:55.371592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:57.374629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:57.378317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:59.381478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:59.385461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:01.388382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:01.392241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:03.395864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:03.400171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:05.403696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:05.407592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:07.410344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:07.415319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:09.418313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:09.422093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:11.425619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:11.431210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:13.434190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:13.437932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
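
The storage-provisioner logs above are dominated by "v1 Endpoints is deprecated in v1.33+" warnings: the provisioner's leader election still polls the legacy v1 Endpoints API every couple of seconds, and the API server attaches a deprecation warning to each call. The warnings are noise here rather than the cause of the failure; code migrating off the deprecated API would read the same data through discovery.k8s.io/v1 EndpointSlices. A minimal client-go sketch, assuming in-cluster credentials — the "default"/"kubernetes" target is illustrative, not taken from this run:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// EndpointSlices carry the owning Service's name in a well-known
		// label, so the EndpointSlice equivalent of reading a Service's
		// v1 Endpoints object is a label-selected list.
		slices, err := cs.DiscoveryV1().EndpointSlices("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kubernetes"})
		if err != nil {
			log.Fatal(err)
		}
		for _, s := range slices.Items {
			for _, ep := range s.Endpoints {
				fmt.Println(s.Name, ep.Addresses)
			}
		}
	}
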
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-823635 -n functional-823635
helpers_test.go:269: (dbg) Run:  kubectl --context functional-823635 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-kvjxl hello-node-connect-7d85dfc575-w6k84 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xpl9m kubernetes-dashboard-855c9754f9-c7xfp
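
The harness collects the pods to triage with a server-side field selector (status.phase!=Running) rather than filtering client-side. For anyone reproducing this triage step programmatically, the same query through client-go looks roughly like the sketch below; the kubeconfig path handling is an assumption, since the test binary resolves its own context:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// clientcmd.RecommendedHomeFile is ~/.kube/config; an assumption here.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// The API server evaluates the field selector, mirroring
		// `kubectl get po -A --field-selector=status.phase!=Running`.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}
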
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-823635 describe pod busybox-mount hello-node-75c85bcc94-kvjxl hello-node-connect-7d85dfc575-w6k84 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xpl9m kubernetes-dashboard-855c9754f9-c7xfp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-823635 describe pod busybox-mount hello-node-75c85bcc94-kvjxl hello-node-connect-7d85dfc575-w6k84 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xpl9m kubernetes-dashboard-855c9754f9-c7xfp: exit status 1 (91.114342ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-823635/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:27:57 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  cri-o://dd337385fed92e05f2a197d6d7595005b37fb5bf9065adcfef39a71b932525fe
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 18 Oct 2025 14:29:05 +0000
	      Finished:     Sat, 18 Oct 2025 14:29:05 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p6k4w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-p6k4w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-823635
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m8s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.888s (1m4.677s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m8s  kubelet            Created container: mount-munger
	  Normal  Started    9m8s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-kvjxl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-823635/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:29:12 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n5b64 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n5b64:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  9m1s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-kvjxl to functional-823635
	  Normal   Pulling    3m41s (x4 over 9m1s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     78s (x4 over 8m3s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     78s (x4 over 8m3s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    3s (x11 over 8m2s)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     3s (x11 over 8m2s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-w6k84
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-823635/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:28:11 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6szb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s6szb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w6k84 to functional-823635
	  Normal   Pulling    3m44s (x4 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     78s (x4 over 8m3s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     78s (x4 over 8m3s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x10 over 8m2s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     1s (x10 over 8m2s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-823635/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:27:56 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b946b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-b946b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/nginx-svc to functional-823635
	  Warning  Failed     4m25s                kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m37s (x4 over 10m)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     15s (x3 over 9m10s)  kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     15s (x4 over 9m10s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    4s (x6 over 9m9s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4s (x6 over 9m9s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-823635/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:27:59 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fxjlk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-fxjlk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/sp-pod to functional-823635
	  Warning  Failed     8m3s                  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m43s                 kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m22s (x3 over 8m3s)  kubelet            Error: ErrImagePull
	  Warning  Failed     3m22s                 kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m49s (x5 over 8m2s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m49s (x5 over 8m2s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m34s (x4 over 10m)   kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-xpl9m" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-c7xfp" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-823635 describe pod busybox-mount hello-node-75c85bcc94-kvjxl hello-node-connect-7d85dfc575-w6k84 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xpl9m kubernetes-dashboard-855c9754f9-c7xfp: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.86s)
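
Two independent pull failures sink ServiceCmdConnect: docker.io/nginx:alpine trips Docker Hub's anonymous toomanyrequests limit, while the unqualified kicbase/echo-server is rejected outright because CRI-O enforces short-name resolution ("short name mode is enforcing ... returns ambiguous list"). Fully qualifying the image in the manifest as docker.io/kicbase/echo-server:latest would sidestep the second error, assuming Docker Hub is the intended source. For the first, Docker Hub reports the remaining anonymous pull budget in ratelimit-* headers on a manifest HEAD request against its documented ratelimitpreview/test probe repository; a small Go check along those lines (a sketch, not part of the test suite):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"net/http"
	)

	func main() {
		// Anonymous pull token for Docker Hub's documented rate-limit
		// probe repository (ratelimitpreview/test).
		resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		var tok struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			log.Fatal(err)
		}

		// HEAD keeps the check itself from consuming a pull.
		req, err := http.NewRequest(http.MethodHead,
			"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
		if err != nil {
			log.Fatal(err)
		}
		req.Header.Set("Authorization", "Bearer "+tok.Token)
		hr, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Fatal(err)
		}
		defer hr.Body.Close()
		fmt.Println("ratelimit-limit:    ", hr.Header.Get("ratelimit-limit"))
		fmt.Println("ratelimit-remaining:", hr.Header.Get("ratelimit-remaining"))
	}
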

TestFunctional/parallel/PersistentVolumeClaim (369.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [e305bad0-7db2-4452-89a0-70e221694143] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004066964s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-823635 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-823635 apply -f testdata/storage-provisioner/pvc.yaml
E1018 14:27:59.642658   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-823635 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-823635 apply -f testdata/storage-provisioner/pod.yaml
I1018 14:27:59.914611   93187 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0197dc5f-60ed-4050-b6ee-2bb44454bbd7] Pending
helpers_test.go:352: "sp-pod" [0197dc5f-60ed-4050-b6ee-2bb44454bbd7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1018 14:28:00.924289   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-823635 -n functional-823635
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-10-18 14:34:00.23311826 +0000 UTC m=+1140.740121210
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-823635 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-823635 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-823635/192.168.49.2
Start Time:       Sat, 18 Oct 2025 14:27:59 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:  10.244.0.7
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fxjlk (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-fxjlk:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  6m1s                 default-scheduler  Successfully assigned default/sp-pod to functional-823635
Warning  Failed     3m50s                kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     90s (x2 over 3m50s)  kubelet            Error: ErrImagePull
Warning  Failed     90s                  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    78s (x2 over 3m49s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     78s (x2 over 3m49s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    66s (x3 over 6m)     kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-823635 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-823635 logs sp-pod -n default: exit status 1 (67.854103ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-823635 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
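
The PVC side of this test actually progressed: myclaim bound and sp-pod mounted it at /tmp/mount, but the pod never turned Ready because the same docker.io/nginx rate limit hit its myfrontend container. The 6m0s wait that timed out is, in effect, a poll for a Ready pod carrying the test=storage-provisioner label; a rough client-go equivalent of that step (a sketch — clientset construction is elided and names are taken from the output above):

	package pvcwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabeledPod polls until some pod matching selector reports Ready
	// or the timeout lapses -- roughly the harness's "waiting 6m0s for pods
	// matching test=storage-provisioner" step. Transient list errors are
	// swallowed so polling continues.
	func waitForLabeledPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil
				}
				for _, p := range pods.Items {
					for _, c := range p.Status.Conditions {
						if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
							return true, nil
						}
					}
				}
				return false, nil
			})
	}

A call like waitForLabeledPod(ctx, cs, "default", "test=storage-provisioner", 6*time.Minute) mirrors the step that failed here.
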
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-823635
helpers_test.go:243: (dbg) docker inspect functional-823635:

-- stdout --
	[
	    {
	        "Id": "0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e",
	        "Created": "2025-10-18T14:26:18.436737288Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 120898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T14:26:18.470822803Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e/hostname",
	        "HostsPath": "/var/lib/docker/containers/0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e/hosts",
	        "LogPath": "/var/lib/docker/containers/0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e/0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e-json.log",
	        "Name": "/functional-823635",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-823635:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-823635",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0cd7caf20b475a4bad6d1eb25c6ac528409ab8163ad3c2cd94134be39235bb4e",
	                "LowerDir": "/var/lib/docker/overlay2/5082ed7a7d4f3cdf2f9271d923fd3b0d056c6762d4c76a1ba4517906c5b1b1bf-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5082ed7a7d4f3cdf2f9271d923fd3b0d056c6762d4c76a1ba4517906c5b1b1bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5082ed7a7d4f3cdf2f9271d923fd3b0d056c6762d4c76a1ba4517906c5b1b1bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5082ed7a7d4f3cdf2f9271d923fd3b0d056c6762d4c76a1ba4517906c5b1b1bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-823635",
	                "Source": "/var/lib/docker/volumes/functional-823635/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-823635",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-823635",
	                "name.minikube.sigs.k8s.io": "functional-823635",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fd03071fcaf4a0d71521bffa1eb6767116fc7bde333deaa49c9042ef66155301",
	            "SandboxKey": "/var/run/docker/netns/fd03071fcaf4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-823635": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:17:41:74:60:89",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ac30e007c0f5497eef211625aeba4ddabc991ddfbfb64985fe205fdaca6d7800",
	                    "EndpointID": "70ea1dea80c293bd2b5dcbe0155d3887dc026762d0c6aed349ff54f357d2d760",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-823635",
	                        "0cd7caf20b47"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
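
The inspect output shows a healthy node container: running (PID 120898), a 4 GiB memory cap, IP 192.168.49.2 on the functional-823635 network, and the API server port 8441 published on 127.0.0.1:32781 — so the cluster plumbing itself is fine and the failure is confined to image pulls. Individual fields can be extracted without scraping the full JSON via docker inspect's -f Go template; note that hyphenated map keys such as the network name require index rather than dot access. A sketch shelling out from Go:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Hyphenated keys like "functional-823635" cannot use dot access
		// in Go templates, hence index.
		for _, tmpl := range []string{
			"{{ .State.Status }}",
			`{{ (index .NetworkSettings.Networks "functional-823635").IPAddress }}`,
			`{{ (index .NetworkSettings.Ports "8441/tcp") }}`,
		} {
			out, err := exec.Command("docker", "inspect", "-f", tmpl, "functional-823635").Output()
			if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%-64s %s\n", tmpl, strings.TrimSpace(string(out)))
		}
	}
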
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-823635 -n functional-823635
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-823635 logs -n 25: (1.273576438s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-823635 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2868645048/001:/mount1 --alsologtostderr -v=1                                              │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:29 UTC │                     │
	│ mount     │ -p functional-823635 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2868645048/001:/mount3 --alsologtostderr -v=1                                              │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:29 UTC │                     │
	│ mount     │ -p functional-823635 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2868645048/001:/mount2 --alsologtostderr -v=1                                              │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:29 UTC │                     │
	│ ssh       │ functional-823635 ssh findmnt -T /mount1                                                                                                                        │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:29 UTC │ 18 Oct 25 14:29 UTC │
	│ ssh       │ functional-823635 ssh findmnt -T /mount2                                                                                                                        │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:29 UTC │ 18 Oct 25 14:29 UTC │
	│ ssh       │ functional-823635 ssh findmnt -T /mount3                                                                                                                        │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:29 UTC │ 18 Oct 25 14:29 UTC │
	│ mount     │ -p functional-823635 --kill=true                                                                                                                                │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:29 UTC │                     │
	│ license   │                                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ start     │ -p functional-823635 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │                     │
	│ ssh       │ functional-823635 ssh sudo systemctl is-active docker                                                                                                           │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │                     │
	│ ssh       │ functional-823635 ssh sudo systemctl is-active containerd                                                                                                       │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │                     │
	│ image     │ functional-823635 image load --daemon kicbase/echo-server:functional-823635 --alsologtostderr                                                                   │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image     │ functional-823635 image ls                                                                                                                                      │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image     │ functional-823635 image load --daemon kicbase/echo-server:functional-823635 --alsologtostderr                                                                   │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image     │ functional-823635 image ls                                                                                                                                      │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image     │ functional-823635 image load --daemon kicbase/echo-server:functional-823635 --alsologtostderr                                                                   │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image     │ functional-823635 image ls                                                                                                                                      │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image     │ functional-823635 image save kicbase/echo-server:functional-823635 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image     │ functional-823635 image rm kicbase/echo-server:functional-823635 --alsologtostderr                                                                              │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image     │ functional-823635 image ls                                                                                                                                      │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image     │ functional-823635 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ image     │ functional-823635 image save --daemon kicbase/echo-server:functional-823635 --alsologtostderr                                                                   │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │ 18 Oct 25 14:33 UTC │
	│ start     │ -p functional-823635 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │                     │
	│ start     │ -p functional-823635 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                 │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-823635 --alsologtostderr -v=1                                                                                                  │ functional-823635 │ jenkins │ v1.37.0 │ 18 Oct 25 14:33 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 14:33:19
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 14:33:19.837101  137575 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:33:19.837214  137575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:33:19.837219  137575 out.go:374] Setting ErrFile to fd 2...
	I1018 14:33:19.837224  137575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:33:19.837429  137575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:33:19.837864  137575 out.go:368] Setting JSON to false
	I1018 14:33:19.838774  137575 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":8151,"bootTime":1760789849,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:33:19.838873  137575 start.go:141] virtualization: kvm guest
	I1018 14:33:19.840818  137575 out.go:179] * [functional-823635] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 14:33:19.842355  137575 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 14:33:19.842339  137575 notify.go:220] Checking for updates...
	I1018 14:33:19.843868  137575 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:33:19.845097  137575 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 14:33:19.846215  137575 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 14:33:19.847364  137575 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 14:33:19.848501  137575 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 14:33:19.850035  137575 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:33:19.850568  137575 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:33:19.873951  137575 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 14:33:19.874066  137575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:33:19.933740  137575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-18 14:33:19.923050776 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:33:19.933880  137575 docker.go:318] overlay module found
	I1018 14:33:19.935576  137575 out.go:179] * Using the docker driver based on existing profile
	I1018 14:33:19.936824  137575 start.go:305] selected driver: docker
	I1018 14:33:19.936841  137575 start.go:925] validating driver "docker" against &{Name:functional-823635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-823635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:33:19.936968  137575 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 14:33:19.937071  137575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:33:19.996183  137575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-18 14:33:19.986250339 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:33:19.996789  137575 cni.go:84] Creating CNI manager for ""
	I1018 14:33:19.996857  137575 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 14:33:19.996897  137575 start.go:349] cluster config:
	{Name:functional-823635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-823635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:33:19.999720  137575 out.go:179] * dry-run validation complete!
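
Note: the start log above finishes with a dry-run validation pass against the existing functional-823635 profile. A minimal way to re-run that validation by hand (a sketch, assuming the same binary path and profile name used throughout this report):

    $ out/minikube-linux-amd64 start -p functional-823635 --dry-run --alsologtostderr -v=1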
	
	
	==> CRI-O <==
	Oct 18 14:33:21 functional-823635 crio[3554]: time="2025-10-18T14:33:21.235398741Z" level=info msg="Running pod sandbox: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c7xfp/POD" id=cc7c28e3-f1b6-4dc2-bc21-7c52a8dff443 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 14:33:21 functional-823635 crio[3554]: time="2025-10-18T14:33:21.235495558Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 14:33:21 functional-823635 crio[3554]: time="2025-10-18T14:33:21.240647049Z" level=info msg="Got pod network &{Name:kubernetes-dashboard-855c9754f9-c7xfp Namespace:kubernetes-dashboard ID:79698721b4f58de96d8907ad66fd0e15592b959192ab41e9d703a365b1d8e1c4 UID:95c9f8bb-f768-4d5e-ba3b-dccc22757ed0 NetNS:/var/run/netns/71db5d73-94a9-4f70-8a6f-9e2607ecfa74 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000a024f0}] Aliases:map[]}"
	Oct 18 14:33:21 functional-823635 crio[3554]: time="2025-10-18T14:33:21.240675589Z" level=info msg="Adding pod kubernetes-dashboard_kubernetes-dashboard-855c9754f9-c7xfp to CNI network \"kindnet\" (type=ptp)"
	Oct 18 14:33:21 functional-823635 crio[3554]: time="2025-10-18T14:33:21.243008214Z" level=info msg="Got pod network &{Name:dashboard-metrics-scraper-77bf4d6c4c-xpl9m Namespace:kubernetes-dashboard ID:3a3afea60c21cc11a87ef8caf0bcd9273afd250b2f4b90f393b19d429f285f9d UID:1ac6a078-09b2-482f-a9b4-406cb297f6c9 NetNS:/var/run/netns/29096e53-cddc-4b64-b2b3-7b08977a1ca7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00091c4c0}] Aliases:map[]}"
	Oct 18 14:33:21 functional-823635 crio[3554]: time="2025-10-18T14:33:21.243201195Z" level=info msg="Checking pod kubernetes-dashboard_dashboard-metrics-scraper-77bf4d6c4c-xpl9m for CNI network kindnet (type=ptp)"
	Oct 18 14:33:21 functional-823635 crio[3554]: time="2025-10-18T14:33:21.244150098Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 14:33:21 functional-823635 crio[3554]: time="2025-10-18T14:33:21.244978712Z" level=info msg="Ran pod sandbox 3a3afea60c21cc11a87ef8caf0bcd9273afd250b2f4b90f393b19d429f285f9d with infra container: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-xpl9m/POD" id=bbbae410-69b0-460b-a9c7-9ec5cafbda0d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 14:33:21 functional-823635 crio[3554]: time="2025-10-18T14:33:21.246363195Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=80180e03-74ec-403f-8732-9108d608c98c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:33:21 functional-823635 crio[3554]: time="2025-10-18T14:33:21.246551275Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=80180e03-74ec-403f-8732-9108d608c98c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:33:21 functional-823635 crio[3554]: time="2025-10-18T14:33:21.246624044Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=80180e03-74ec-403f-8732-9108d608c98c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:33:21 functional-823635 crio[3554]: time="2025-10-18T14:33:21.251580734Z" level=info msg="Got pod network &{Name:kubernetes-dashboard-855c9754f9-c7xfp Namespace:kubernetes-dashboard ID:79698721b4f58de96d8907ad66fd0e15592b959192ab41e9d703a365b1d8e1c4 UID:95c9f8bb-f768-4d5e-ba3b-dccc22757ed0 NetNS:/var/run/netns/71db5d73-94a9-4f70-8a6f-9e2607ecfa74 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000a024f0}] Aliases:map[]}"
	Oct 18 14:33:21 functional-823635 crio[3554]: time="2025-10-18T14:33:21.251740266Z" level=info msg="Checking pod kubernetes-dashboard_kubernetes-dashboard-855c9754f9-c7xfp for CNI network kindnet (type=ptp)"
	Oct 18 14:33:21 functional-823635 crio[3554]: time="2025-10-18T14:33:21.25272177Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 14:33:21 functional-823635 crio[3554]: time="2025-10-18T14:33:21.254178725Z" level=info msg="Ran pod sandbox 79698721b4f58de96d8907ad66fd0e15592b959192ab41e9d703a365b1d8e1c4 with infra container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c7xfp/POD" id=cc7c28e3-f1b6-4dc2-bc21-7c52a8dff443 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 14:33:21 functional-823635 crio[3554]: time="2025-10-18T14:33:21.255285568Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=c4757b04-f174-48bf-aed3-14b42dad5db9 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:33:21 functional-823635 crio[3554]: time="2025-10-18T14:33:21.255448004Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=c4757b04-f174-48bf-aed3-14b42dad5db9 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:33:21 functional-823635 crio[3554]: time="2025-10-18T14:33:21.25551053Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 found" id=c4757b04-f174-48bf-aed3-14b42dad5db9 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:33:48 functional-823635 crio[3554]: time="2025-10-18T14:33:48.784170223Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b4762cae-137e-4c39-85ac-fbc5b7b5a9ac name=/runtime.v1.ImageService/PullImage
	Oct 18 14:33:48 functional-823635 crio[3554]: time="2025-10-18T14:33:48.784974074Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=43642ef9-2392-405b-8201-11c06566e15c name=/runtime.v1.ImageService/PullImage
	Oct 18 14:33:48 functional-823635 crio[3554]: time="2025-10-18T14:33:48.785707105Z" level=info msg="Pulling image: docker.io/nginx:latest" id=18619c35-950a-47e9-a764-2072440442cc name=/runtime.v1.ImageService/PullImage
	Oct 18 14:33:48 functional-823635 crio[3554]: time="2025-10-18T14:33:48.788477151Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 18 14:34:00 functional-823635 crio[3554]: time="2025-10-18T14:34:00.996762098Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=40601f85-79a3-4342-99a0-e9249eb4c75b name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:34:00 functional-823635 crio[3554]: time="2025-10-18T14:34:00.997046148Z" level=info msg="Image docker.io/nginx:alpine not found" id=40601f85-79a3-4342-99a0-e9249eb4c75b name=/runtime.v1.ImageService/ImageStatus
	Oct 18 14:34:00 functional-823635 crio[3554]: time="2025-10-18T14:34:00.997117409Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=40601f85-79a3-4342-99a0-e9249eb4c75b name=/runtime.v1.ImageService/ImageStatus
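
Note: the repeated "Image ... not found" entries above are CRI-O image-status probes issued while the dashboard and nginx pulls are still in flight, not failures in themselves. One way to inspect image state on the node directly (a sketch, assuming the functional-823635 profile is still running and crictl is present in the node image):

    $ minikube -p functional-823635 ssh -- sudo crictl images
    $ minikube -p functional-823635 ssh -- sudo crictl pull docker.io/nginx:alpine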
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	dd337385fed92       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   4 minutes ago       Exited              mount-munger              0                   28fbf2e5ed90a       busybox-mount                               default
	93e37f902f52d       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da       6 minutes ago       Running             mysql                     0                   7138c0b9f3baa       mysql-5bb876957f-8kx2d                      default
	f4f02a115fe05       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      6 minutes ago       Running             kube-apiserver            0                   521e60ec1c4f9       kube-apiserver-functional-823635            kube-system
	510ec089b935b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      6 minutes ago       Running             etcd                      1                   fb579c4379df7       etcd-functional-823635                      kube-system
	480b306de65e2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      6 minutes ago       Running             kube-controller-manager   2                   3c1e4cb8043c7       kube-controller-manager-functional-823635   kube-system
	65a173c29bb11       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      6 minutes ago       Running             kube-scheduler            1                   bab606ab6f35c       kube-scheduler-functional-823635            kube-system
	c89c1234ce311       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      6 minutes ago       Exited              kube-controller-manager   1                   3c1e4cb8043c7       kube-controller-manager-functional-823635   kube-system
	b42a28b08255a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      6 minutes ago       Running             kindnet-cni               1                   8bd811be0c01b       kindnet-stt2s                               kube-system
	0973b81bb4630       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       1                   717cdc288a802       storage-provisioner                         kube-system
	aa85de7b0ebd5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      6 minutes ago       Running             coredns                   1                   fbb2b9970544a       coredns-66bc5c9577-zdmkg                    kube-system
	0caa23e95037b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      6 minutes ago       Running             kube-proxy                1                   a42ae80a6ca10       kube-proxy-b9mv2                            kube-system
	84e8835721763       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Exited              coredns                   0                   fbb2b9970544a       coredns-66bc5c9577-zdmkg                    kube-system
	79e2f9ff59d01       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       0                   717cdc288a802       storage-provisioner                         kube-system
	c98f6eb04b82a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      7 minutes ago       Exited              kindnet-cni               0                   8bd811be0c01b       kindnet-stt2s                               kube-system
	089827e2b297a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      7 minutes ago       Exited              kube-proxy                0                   a42ae80a6ca10       kube-proxy-b9mv2                            kube-system
	50c2de213fe05       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      7 minutes ago       Exited              kube-scheduler            0                   bab606ab6f35c       kube-scheduler-functional-823635            kube-system
	4e3e13eed9434       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      7 minutes ago       Exited              etcd                      0                   fb579c4379df7       etcd-functional-823635                      kube-system
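
Note: the table above is CRI-O's view of all containers, including the Exited first-attempt control-plane containers left over from the mid-test cluster restart. An equivalent listing can be pulled from the node with (same assumptions as the crictl sketch above):

    $ minikube -p functional-823635 ssh -- sudo crictl ps -a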
	
	
	==> coredns [84e8835721763a112dee2effc0c878e7ded9cfb104b777493d0895f93b72052a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37458 - 10645 "HINFO IN 7231688986531392643.6176522986195705811. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.141536718s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [aa85de7b0ebd51c3eafa74cf48260bda3fc8d2bb6a5326417290a56f26baf88d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38471 - 27374 "HINFO IN 7208865881712799088.8543358163044453170. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.104517016s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
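
Note: the connection-refused errors against 10.96.0.1:443 above coincide with the apiserver restart around 14:27 (see the etcd and kube-apiserver sections below); this CoreDNS instance starts with an unsynced Kubernetes API and recovers once the apiserver is reachable again. A simple end-to-end DNS probe once the cluster settles (busybox:1.36 is an arbitrary image choice here):

    $ kubectl run dnscheck --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local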
	
	
	==> describe nodes <==
	Name:               functional-823635
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-823635
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=functional-823635
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T14_26_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 14:26:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-823635
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 14:33:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 14:32:34 +0000   Sat, 18 Oct 2025 14:26:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 14:32:34 +0000   Sat, 18 Oct 2025 14:26:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 14:32:34 +0000   Sat, 18 Oct 2025 14:26:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 14:32:34 +0000   Sat, 18 Oct 2025 14:26:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-823635
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                c89c3cea-f79f-4b3e-bfa3-34b778dae193
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-kvjxl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  default                     hello-node-connect-7d85dfc575-w6k84           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  default                     mysql-5bb876957f-8kx2d                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     6m8s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-zdmkg                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m24s
	  kube-system                 etcd-functional-823635                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m30s
	  kube-system                 kindnet-stt2s                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m24s
	  kube-system                 kube-apiserver-functional-823635              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-functional-823635     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 kube-proxy-b9mv2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-scheduler-functional-823635              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-xpl9m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-c7xfp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m23s                  kube-proxy       
	  Normal  Starting                 6m29s                  kube-proxy       
	  Normal  Starting                 7m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m34s (x8 over 7m34s)  kubelet          Node functional-823635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m34s (x8 over 7m34s)  kubelet          Node functional-823635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m34s (x8 over 7m34s)  kubelet          Node functional-823635 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    7m30s                  kubelet          Node functional-823635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m30s                  kubelet          Node functional-823635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     7m30s                  kubelet          Node functional-823635 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m30s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m25s                  node-controller  Node functional-823635 event: Registered Node functional-823635 in Controller
	  Normal  NodeReady                7m13s                  kubelet          Node functional-823635 status is now: NodeReady
	  Normal  Starting                 6m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m34s (x8 over 6m34s)  kubelet          Node functional-823635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s (x8 over 6m34s)  kubelet          Node functional-823635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s (x8 over 6m34s)  kubelet          Node functional-823635 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m29s                  node-controller  Node functional-823635 event: Registered Node functional-823635 in Controller
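
Note: the percentages under "Allocated resources" are the summed requests (or limits) divided by the node's allocatable capacity: 1450m CPU requested against 8 CPUs (8000m) allocatable rounds to 18%, and 732Mi (749568Ki) requested against 32863456Ki of memory rounds to 2%. In shell arithmetic:

    $ echo "cpu requests:    $(( 1450 * 100 / 8000 ))%"             # 18%
    $ echo "memory requests: $(( 732 * 1024 * 100 / 32863456 ))%"   # 2%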
	
	
	==> dmesg <==
	[  +0.096767] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026410] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.055938] kauditd_printk_skb: 47 callbacks suppressed
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	
	
	==> etcd [4e3e13eed943414a9ff5b1ecd1312e5c7eb4abbb35998a5258ffe489435019e7] <==
	{"level":"warn","ts":"2025-10-18T14:26:28.755035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:26:28.761237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:26:28.767616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:26:28.789314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:26:28.795391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:26:28.801751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:26:28.847994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53618","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T14:27:25.470976Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T14:27:25.471126Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-823635","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-18T14:27:25.471288Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T14:27:25.472865Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T14:27:25.472938Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:27:25.472958Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-18T14:27:25.473015Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T14:27:25.473030Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T14:27:25.473039Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:27:25.473053Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-18T14:27:25.473051Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-18T14:27:25.473080Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T14:27:25.473131Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T14:27:25.473146Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:27:25.475024Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-18T14:27:25.475099Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T14:27:25.475121Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-18T14:27:25.475126Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-823635","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [510ec089b935bcedf7d7d5aaaeac3889348d081ccb8cb04c9f0ac6b07b07ade4] <==
	{"level":"warn","ts":"2025-10-18T14:27:28.676491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.682448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.691366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.698601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.704720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.711243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.718181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.725019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.731605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.740539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.749051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.754942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.760868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.767374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.779310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.785480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.791363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.798293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.804427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.810389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.816761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.827965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.833772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.839574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T14:27:28.890018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47754","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:34:01 up  2:16,  0 user,  load average: 0.25, 0.42, 1.26
	Linux functional-823635 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b42a28b08255a3bef55d5fa86d732fafa60fa297ac485543f4dade8ec44bc21d] <==
	I1018 14:31:55.397875       1 main.go:301] handling current node
	I1018 14:32:05.398097       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:32:05.398150       1 main.go:301] handling current node
	I1018 14:32:15.399238       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:32:15.399272       1 main.go:301] handling current node
	I1018 14:32:25.398037       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:32:25.398072       1 main.go:301] handling current node
	I1018 14:32:35.397851       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:32:35.397896       1 main.go:301] handling current node
	I1018 14:32:45.398398       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:32:45.398437       1 main.go:301] handling current node
	I1018 14:32:55.397937       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:32:55.397999       1 main.go:301] handling current node
	I1018 14:33:05.398079       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:33:05.398115       1 main.go:301] handling current node
	I1018 14:33:15.404987       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:33:15.405017       1 main.go:301] handling current node
	I1018 14:33:25.398190       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:33:25.398243       1 main.go:301] handling current node
	I1018 14:33:35.398255       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:33:35.398306       1 main.go:301] handling current node
	I1018 14:33:45.398431       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:33:45.398460       1 main.go:301] handling current node
	I1018 14:33:55.401049       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:33:55.401087       1 main.go:301] handling current node
	
	
	==> kindnet [c98f6eb04b82aa0b7cc5310c19c7a42d5e988cb5dec7981b768a563ad8848a4a] <==
	I1018 14:26:38.183058       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 14:26:38.183348       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1018 14:26:38.183496       1 main.go:148] setting mtu 1500 for CNI 
	I1018 14:26:38.183512       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 14:26:38.183530       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T14:26:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 14:26:38.383580       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 14:26:38.384446       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 14:26:38.384488       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 14:26:38.384694       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 14:26:38.685469       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 14:26:38.685495       1 metrics.go:72] Registering metrics
	I1018 14:26:38.685541       1 controller.go:711] "Syncing nftables rules"
	I1018 14:26:48.387426       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:26:48.387493       1 main.go:301] handling current node
	I1018 14:26:58.391538       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:26:58.391584       1 main.go:301] handling current node
	I1018 14:27:08.388039       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 14:27:08.388070       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f4f02a115fe053e62f50a9933a8129933b890f1ad2341770ee4e1d3c244922a5] <==
	I1018 14:27:29.346178       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 14:27:29.347489       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 14:27:29.365958       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 14:27:30.074021       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 14:27:30.235431       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1018 14:27:30.441146       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1018 14:27:30.442363       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 14:27:30.446356       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 14:27:30.830119       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 14:27:30.922433       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 14:27:30.969092       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 14:27:30.974118       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 14:27:35.090624       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 14:27:49.633545       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.137.242"}
	I1018 14:27:53.926329       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.47.131"}
	I1018 14:27:56.292249       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.212.182"}
	E1018 14:28:07.101429       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37014: use of closed network connection
	E1018 14:28:08.077420       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37036: use of closed network connection
	E1018 14:28:09.467461       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37056: use of closed network connection
	E1018 14:28:10.923014       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37066: use of closed network connection
	I1018 14:28:11.184020       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.236.205"}
	I1018 14:29:12.558896       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.86.84"}
	I1018 14:33:20.830094       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 14:33:20.936844       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.218.253"}
	I1018 14:33:20.949394       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.153.159"}
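
Note: each "allocated clusterIPs" line above marks a Service created by one of the tests (mysql, nginx-svc, hello-node-connect, hello-node, and the dashboard). The allocations can be cross-checked against the live cluster with:

    $ kubectl get svc -A -o wide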
	
	
	==> kube-controller-manager [480b306de65e2b064686a5b46763f65b2d0c2ca241a51b9c215b6d051ec2a38d] <==
	I1018 14:27:32.679137       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 14:27:32.679066       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 14:27:32.679197       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 14:27:32.679077       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 14:27:32.680438       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 14:27:32.684675       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 14:27:32.684675       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 14:27:32.697837       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 14:27:32.697890       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 14:27:32.697924       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 14:27:32.697929       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 14:27:32.697933       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 14:27:32.699198       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 14:27:32.700246       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 14:27:32.700352       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 14:27:32.700442       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-823635"
	I1018 14:27:32.700494       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 14:27:32.702592       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 14:27:32.704881       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	E1018 14:33:20.879466       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:33:20.883478       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:33:20.886986       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:33:20.887939       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:33:20.889983       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 14:33:20.895450       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [c89c1234ce311e4076f08ce6733d8b7750437cd52cbbc34f8bb6350e20808e1b] <==
	I1018 14:27:15.230960       1 serving.go:386] Generated self-signed cert in-memory
	I1018 14:27:15.458255       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 14:27:15.458276       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:27:15.459506       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 14:27:15.459523       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1018 14:27:15.459855       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 14:27:15.459909       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 14:27:25.462220       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [089827e2b297a614c2c88054a3fe7135ff5aeb3e0210cb68fc9558b8469187ec] <==
	I1018 14:26:38.030805       1 server_linux.go:53] "Using iptables proxy"
	I1018 14:26:38.099565       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 14:26:38.200564       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:26:38.200602       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 14:26:38.200680       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:26:38.219018       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 14:26:38.219062       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:26:38.224155       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:26:38.224526       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:26:38.224562       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:26:38.225873       1 config.go:200] "Starting service config controller"
	I1018 14:26:38.225886       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:26:38.225902       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:26:38.225908       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:26:38.225988       1 config.go:309] "Starting node config controller"
	I1018 14:26:38.225995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:26:38.225985       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:26:38.226005       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:26:38.226001       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:26:38.326096       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:26:38.326097       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 14:26:38.327398       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [0caa23e95037ba4b680939efa51d43b1deef3bdd2d7fe3bc6b60e2776dd86054] <==
	I1018 14:27:15.067811       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1018 14:27:15.068838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-823635&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:27:16.326596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-823635&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:27:19.026736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-823635&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:27:24.752138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-823635&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1018 14:27:32.168784       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 14:27:32.168848       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 14:27:32.169004       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 14:27:32.189101       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 14:27:32.189152       1 server_linux.go:132] "Using iptables Proxier"
	I1018 14:27:32.194889       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 14:27:32.195323       1 server.go:527] "Version info" version="v1.34.1"
	I1018 14:27:32.195370       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 14:27:32.196813       1 config.go:309] "Starting node config controller"
	I1018 14:27:32.196835       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 14:27:32.196844       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 14:27:32.196859       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 14:27:32.196864       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 14:27:32.196893       1 config.go:200] "Starting service config controller"
	I1018 14:27:32.196900       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 14:27:32.196942       1 config.go:106] "Starting endpoint slice config controller"
	I1018 14:27:32.196949       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 14:27:32.297496       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 14:27:32.297525       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 14:27:32.297519       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [50c2de213fe05086fd0f202bc87d4794e5cf06d1e90bc2b581c33039db5afeb7] <==
	E1018 14:26:29.246389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:26:29.246436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:26:29.246477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 14:26:29.246538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 14:26:29.246590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 14:26:30.050359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:26:30.096722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 14:26:30.134148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 14:26:30.273298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 14:26:30.343340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 14:26:30.365835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 14:26:30.368882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:26:30.386971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:26:30.392044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:26:30.419056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 14:26:30.430286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:26:30.484589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:26:30.501822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1018 14:26:30.844191       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:27:14.845369       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 14:27:14.845432       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 14:27:14.845513       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 14:27:14.845540       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 14:27:14.845547       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 14:27:14.845571       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [65a173c29bb11fa7b34e191eaa9aeef81e739e6edee0571260868bfa4f411b94] <==
	E1018 14:27:20.428735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 14:27:20.442371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:27:20.466841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 14:27:20.492518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:27:20.968943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:27:23.394557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 14:27:23.960214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 14:27:24.072090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 14:27:24.135977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 14:27:24.269897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 14:27:24.309794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 14:27:24.354393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 14:27:24.434450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 14:27:24.800416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 14:27:24.947637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 14:27:25.005447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 14:27:25.043088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 14:27:25.138956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 14:27:25.523395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 14:27:25.798266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 14:27:26.001045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 14:27:26.051712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 14:27:26.794417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 14:27:26.836204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1018 14:27:35.488262       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 14:31:27 functional-823635 kubelet[4121]: E1018 14:31:27.995822    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-kvjxl" podUID="c0e2f316-9809-4f04-8b57-14e8eb1f0204"
	Oct 18 14:31:28 functional-823635 kubelet[4121]: E1018 14:31:28.996215    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w6k84" podUID="afc7f7cc-801e-451a-a546-408cff4e3833"
	Oct 18 14:32:30 functional-823635 kubelet[4121]: E1018 14:32:30.903541    4121 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 18 14:32:30 functional-823635 kubelet[4121]: E1018 14:32:30.903609    4121 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 18 14:32:30 functional-823635 kubelet[4121]: E1018 14:32:30.903819    4121 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(0197dc5f-60ed-4050-b6ee-2bb44454bbd7): ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 14:32:30 functional-823635 kubelet[4121]: E1018 14:32:30.903894    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0197dc5f-60ed-4050-b6ee-2bb44454bbd7"
	Oct 18 14:32:42 functional-823635 kubelet[4121]: E1018 14:32:42.996360    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0197dc5f-60ed-4050-b6ee-2bb44454bbd7"
	Oct 18 14:33:21 functional-823635 kubelet[4121]: I1018 14:33:21.061198    4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2h6r\" (UniqueName: \"kubernetes.io/projected/1ac6a078-09b2-482f-a9b4-406cb297f6c9-kube-api-access-t2h6r\") pod \"dashboard-metrics-scraper-77bf4d6c4c-xpl9m\" (UID: \"1ac6a078-09b2-482f-a9b4-406cb297f6c9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-xpl9m"
	Oct 18 14:33:21 functional-823635 kubelet[4121]: I1018 14:33:21.061306    4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/95c9f8bb-f768-4d5e-ba3b-dccc22757ed0-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-c7xfp\" (UID: \"95c9f8bb-f768-4d5e-ba3b-dccc22757ed0\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c7xfp"
	Oct 18 14:33:21 functional-823635 kubelet[4121]: I1018 14:33:21.061344    4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6btvd\" (UniqueName: \"kubernetes.io/projected/95c9f8bb-f768-4d5e-ba3b-dccc22757ed0-kube-api-access-6btvd\") pod \"kubernetes-dashboard-855c9754f9-c7xfp\" (UID: \"95c9f8bb-f768-4d5e-ba3b-dccc22757ed0\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c7xfp"
	Oct 18 14:33:21 functional-823635 kubelet[4121]: I1018 14:33:21.061375    4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1ac6a078-09b2-482f-a9b4-406cb297f6c9-tmp-volume\") pod \"dashboard-metrics-scraper-77bf4d6c4c-xpl9m\" (UID: \"1ac6a078-09b2-482f-a9b4-406cb297f6c9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-xpl9m"
	Oct 18 14:33:48 functional-823635 kubelet[4121]: E1018 14:33:48.783645    4121 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 18 14:33:48 functional-823635 kubelet[4121]: E1018 14:33:48.783717    4121 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 18 14:33:48 functional-823635 kubelet[4121]: E1018 14:33:48.783933    4121 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx-svc_default(ee39520e-b6d9-4fe9-824c-db1d0b2e661e): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 14:33:48 functional-823635 kubelet[4121]: E1018 14:33:48.784006    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ee39520e-b6d9-4fe9-824c-db1d0b2e661e"
	Oct 18 14:33:48 functional-823635 kubelet[4121]: E1018 14:33:48.784556    4121 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 18 14:33:48 functional-823635 kubelet[4121]: E1018 14:33:48.784594    4121 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 18 14:33:48 functional-823635 kubelet[4121]: E1018 14:33:48.784720    4121 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-kvjxl_default(c0e2f316-9809-4f04-8b57-14e8eb1f0204): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Oct 18 14:33:48 functional-823635 kubelet[4121]: E1018 14:33:48.785030    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-kvjxl" podUID="c0e2f316-9809-4f04-8b57-14e8eb1f0204"
	Oct 18 14:33:48 functional-823635 kubelet[4121]: E1018 14:33:48.785307    4121 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 18 14:33:48 functional-823635 kubelet[4121]: E1018 14:33:48.785349    4121 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 18 14:33:48 functional-823635 kubelet[4121]: E1018 14:33:48.785520    4121 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-w6k84_default(afc7f7cc-801e-451a-a546-408cff4e3833): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Oct 18 14:33:48 functional-823635 kubelet[4121]: E1018 14:33:48.786851    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w6k84" podUID="afc7f7cc-801e-451a-a546-408cff4e3833"
	Oct 18 14:33:58 functional-823635 kubelet[4121]: E1018 14:33:58.996366    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-kvjxl" podUID="c0e2f316-9809-4f04-8b57-14e8eb1f0204"
	Oct 18 14:34:00 functional-823635 kubelet[4121]: E1018 14:34:00.997534    4121 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ee39520e-b6d9-4fe9-824c-db1d0b2e661e"
	
	
	==> storage-provisioner [0973b81bb46303994a6ee55425593dc6f31b75283660128cdf4b2aca621b2db0] <==
	W1018 14:33:37.778828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:39.782692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:39.788131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:41.791000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:41.794880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:43.798399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:43.803518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:45.807156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:45.812600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:47.816834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:47.820846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:49.823686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:49.827682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:51.831334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:51.835721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:53.838948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:53.843448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:55.846793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:55.851568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:57.855282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:57.859559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:59.862718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:33:59.866803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:01.870555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:34:01.876082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [79e2f9ff59d010e391ea9ba1565688857cee3f6061e58fe816e0fd7bc5464d4c] <==
	I1018 14:26:49.443004       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-823635_bdc6698e-30cc-41ba-8640-be0d38b72921!
	W1018 14:26:51.350611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:51.356452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:53.359516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:53.363277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:55.366759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:55.371592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:57.374629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:57.378317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:59.381478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:26:59.385461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:01.388382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:01.392241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:03.395864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:03.400171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:05.403696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:05.407592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:07.410344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:07.415319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:09.418313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:09.422093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:11.425619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:11.431210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:13.434190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 14:27:13.437932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-823635 -n functional-823635
helpers_test.go:269: (dbg) Run:  kubectl --context functional-823635 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-kvjxl hello-node-connect-7d85dfc575-w6k84 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xpl9m kubernetes-dashboard-855c9754f9-c7xfp
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-823635 describe pod busybox-mount hello-node-75c85bcc94-kvjxl hello-node-connect-7d85dfc575-w6k84 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xpl9m kubernetes-dashboard-855c9754f9-c7xfp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-823635 describe pod busybox-mount hello-node-75c85bcc94-kvjxl hello-node-connect-7d85dfc575-w6k84 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xpl9m kubernetes-dashboard-855c9754f9-c7xfp: exit status 1 (93.616962ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-823635/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:27:57 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  cri-o://dd337385fed92e05f2a197d6d7595005b37fb5bf9065adcfef39a71b932525fe
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 18 Oct 2025 14:29:05 +0000
	      Finished:     Sat, 18 Oct 2025 14:29:05 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p6k4w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-p6k4w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m5s   default-scheduler  Successfully assigned default/busybox-mount to functional-823635
	  Normal  Pulling    6m2s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m57s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.888s (1m4.677s including waiting). Image size: 4631262 bytes.
	  Normal  Created    4m57s  kubelet            Created container: mount-munger
	  Normal  Started    4m57s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-kvjxl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-823635/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:29:12 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n5b64 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n5b64:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  4m50s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-kvjxl to functional-823635
	  Normal   Pulling    2m22s (x3 over 4m50s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     14s (x3 over 3m52s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     14s (x3 over 3m52s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4s (x3 over 3m51s)     kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4s (x3 over 3m51s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-w6k84
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-823635/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:28:11 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6szb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s6szb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m51s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w6k84 to functional-823635
	  Normal   BackOff    2m34s (x2 over 3m51s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2m34s (x2 over 3m51s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m19s (x3 over 5m51s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     14s (x3 over 3m52s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     14s (x3 over 3m52s)    kubelet            Error: ErrImagePull
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-823635/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:27:56 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b946b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-b946b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m6s                   default-scheduler  Successfully assigned default/nginx-svc to functional-823635
	  Warning  Failed     2m49s (x2 over 4m59s)  kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m24s (x3 over 6m6s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     14s (x3 over 4m59s)    kubelet            Error: ErrImagePull
	  Warning  Failed     14s                    kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2s (x3 over 4m58s)     kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2s (x3 over 4m58s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-823635/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 14:27:59 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fxjlk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-fxjlk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/sp-pod to functional-823635
	  Warning  Failed     3m52s                kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     92s (x2 over 3m52s)  kubelet            Error: ErrImagePull
	  Warning  Failed     92s                  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    80s (x2 over 3m51s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     80s (x2 over 3m51s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    68s (x3 over 6m2s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-xpl9m" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-c7xfp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-823635 describe pod busybox-mount hello-node-75c85bcc94-kvjxl hello-node-connect-7d85dfc575-w6k84 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xpl9m kubernetes-dashboard-855c9754f9-c7xfp: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (369.04s)
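
The sp-pod and nginx-svc failures above are Docker Hub rate limiting ("toomanyrequests"), not a storage-provisioner bug. A minimal workaround sketch, assuming the test host still has pull quota or a cached copy (profile name taken from this run):

    # pull once on the host, then side-load so the kubelet never contacts docker.io
    docker pull docker.io/nginx:alpine
    minikube -p functional-823635 image load docker.io/nginx:alpine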

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-823635 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [ee39520e-b6d9-4fe9-824c-db1d0b2e661e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-823635 -n functional-823635
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-10-18 14:31:56.607110217 +0000 UTC m=+1017.114113181
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-823635 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-823635 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-823635/192.168.49.2
Start Time:       Sat, 18 Oct 2025 14:27:56 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b946b (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-b946b:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-823635
  Warning  Failed     43s (x2 over 2m53s)  kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     43s (x2 over 2m53s)  kubelet            Error: ErrImagePull
  Normal   BackOff    33s (x2 over 2m52s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     33s (x2 over 2m52s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    18s (x3 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-823635 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-823635 logs nginx-svc -n default: exit status 1 (65.842007ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-823635 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.75s)
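
Pulling as an authenticated user raises the Docker Hub limit that blocks nginx-svc here. A hedged sketch; the secret name and the $DOCKER_* variables are placeholders, not from this run:

    # register a Docker Hub credential and attach it to the default service account
    kubectl --context functional-823635 create secret docker-registry dockerhub-creds \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PASS"
    kubectl --context functional-823635 patch serviceaccount default \
      -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'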

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-823635 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-823635 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-kvjxl" [c0e2f316-9809-4f04-8b57-14e8eb1f0204] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1018 14:29:20.293635   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:30:42.215155   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-823635 -n functional-823635
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-18 14:39:12.872168229 +0000 UTC m=+1453.379171181
functional_test.go:1460: (dbg) Run:  kubectl --context functional-823635 describe po hello-node-75c85bcc94-kvjxl -n default
functional_test.go:1460: (dbg) kubectl --context functional-823635 describe po hello-node-75c85bcc94-kvjxl -n default:
Name:             hello-node-75c85bcc94-kvjxl
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-823635/192.168.49.2
Start Time:       Sat, 18 Oct 2025 14:29:12 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n5b64 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-n5b64:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-kvjxl to functional-823635
  Warning  Failed     2m17s (x4 over 9m2s)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     2m17s (x4 over 9m2s)  kubelet            Error: ErrImagePull
  Normal   BackOff    62s (x11 over 9m1s)   kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     62s (x11 over 9m1s)   kubelet            Error: ImagePullBackOff
  Normal   Pulling    49s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-823635 logs hello-node-75c85bcc94-kvjxl -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-823635 logs hello-node-75c85bcc94-kvjxl -n default: exit status 1 (64.847112ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-kvjxl" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-823635 logs hello-node-75c85bcc94-kvjxl -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.57s)
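
The repeated kubelet error ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list") is CRI-O's short-name policy rejecting the unqualified image reference the test deploys. A sketch of the fully qualified form the policy should accept; the docker.io prefix is added here for illustration, the test itself passes the short name:

    kubectl --context functional-823635 create deployment hello-node \
      --image=docker.io/kicbase/echo-server:latest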

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (73.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1018 14:31:56.740108   93187 retry.go:31] will retry after 2.57267229s: Temporary Error: Get "http:": http: no Host in request URL
I1018 14:31:59.313455   93187 retry.go:31] will retry after 5.37253832s: Temporary Error: Get "http:": http: no Host in request URL
I1018 14:32:04.687104   93187 retry.go:31] will retry after 4.684280006s: Temporary Error: Get "http:": http: no Host in request URL
I1018 14:32:09.371855   93187 retry.go:31] will retry after 12.137618282s: Temporary Error: Get "http:": http: no Host in request URL
I1018 14:32:21.510064   93187 retry.go:31] will retry after 10.755418021s: Temporary Error: Get "http:": http: no Host in request URL
I1018 14:32:32.266071   93187 retry.go:31] will retry after 18.782883299s: Temporary Error: Get "http:": http: no Host in request URL
I1018 14:32:51.049276   93187 retry.go:31] will retry after 19.197738966s: Temporary Error: Get "http:": http: no Host in request URL
E1018 14:32:58.354843   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-823635 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
nginx-svc   LoadBalancer   10.102.212.182   10.102.212.182   80:32726/TCP   5m14s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (73.57s)
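
Every retry hits `Get "http:": http: no Host in request URL`, i.e. the test was handed an empty URL; nginx-svc does get an EXTERNAL-IP from the tunnel (see the svc output above) but has no Ready backend, so even a well-formed request would have failed. A sketch of the manual equivalent of this check, once the pod actually runs (EXTERNAL-IP taken from the svc output above):

    # with `minikube -p functional-823635 tunnel` running in another shell
    curl -s http://10.102.212.182 | grep 'Welcome to nginx!'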

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 image load --daemon kicbase/echo-server:functional-823635 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-823635" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.85s)
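
This assertion family loads an image and then looks for it in `image ls` output. A manual reproduction sketch of the same check (tag name from the test):

    minikube -p functional-823635 image load --daemon kicbase/echo-server:functional-823635
    minikube -p functional-823635 image ls | grep echo-server   # expected to show the functional-823635 tag; empty on this run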

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 image load --daemon kicbase/echo-server:functional-823635 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-823635" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-823635
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 image load --daemon kicbase/echo-server:functional-823635 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-823635" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 image save kicbase/echo-server:functional-823635 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1018 14:33:19.171448  137337 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:33:19.171575  137337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:33:19.171584  137337 out.go:374] Setting ErrFile to fd 2...
	I1018 14:33:19.171588  137337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:33:19.171803  137337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:33:19.172396  137337 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:33:19.172482  137337 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:33:19.172862  137337 cli_runner.go:164] Run: docker container inspect functional-823635 --format={{.State.Status}}
	I1018 14:33:19.190275  137337 ssh_runner.go:195] Run: systemctl --version
	I1018 14:33:19.190347  137337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-823635
	I1018 14:33:19.208136  137337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/functional-823635/id_rsa Username:docker}
	I1018 14:33:19.302940  137337 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1018 14:33:19.303028  137337 cache_images.go:254] Failed to load cached images for "functional-823635": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1018 14:33:19.303082  137337 cache_images.go:266] failed pushing to: functional-823635

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)
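
This failure is downstream of ImageSaveToFile: the tar was never written, so the load hits "stat ...: no such file or directory". A local reproduction sketch that guards the second step on the first (path reused from the log):

    tar=/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
    minikube -p functional-823635 image save kicbase/echo-server:functional-823635 "$tar"
    test -s "$tar" && minikube -p functional-823635 image load "$tar"   # load only if save produced bytes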

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-823635
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 image save --daemon kicbase/echo-server:functional-823635 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-823635
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-823635: exit status 1 (16.792157ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-823635

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-823635

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)
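
`image save --daemon` is expected to export the image back into the host Docker daemon, where this test then inspects it under the localhost/ prefix. The manual equivalent of the failed check:

    minikube -p functional-823635 image save --daemon kicbase/echo-server:functional-823635
    docker image inspect localhost/kicbase/echo-server:functional-823635 --format '{{.Id}}'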

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-823635 service --namespace=default --https --url hello-node: exit status 115 (522.036115ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31992
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-823635 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)
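
SVC_UNREACHABLE means the Service object exists but has no running backend; hello-node's only pod is stuck in ImagePullBackOff (see DeployApp above). The quickest confirmation sketch:

    kubectl --context functional-823635 get endpoints hello-node   # ENDPOINTS stays <none> while no pod is Ready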

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-823635 service hello-node --url --format={{.IP}}: exit status 115 (524.101089ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-823635 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-823635 service hello-node --url: exit status 115 (525.521269ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31992
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-823635 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31992
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.53s)
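
All three ServiceCmd URL variants fail for the same root cause as DeployApp: CRI-O rejects the unqualified image, so no hello-node pod ever runs. The policy behind "short name mode is enforcing" lives in the node's containers registries config; a hedged look at it, assuming the stock file location:

    minikube -p functional-823635 ssh -- grep -n short-name /etc/containers/registries.conf
    # short-name-mode = "enforcing" refuses ambiguous short names; a registry-qualified
    # image reference (docker.io/kicbase/echo-server) bypasses the lookup entirely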

                                                
                                    
TestJSONOutput/pause/Command (2.38s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-822582 --output=json --user=testUser
E1018 14:47:58.354647   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-822582 --output=json --user=testUser: exit status 80 (2.378125922s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"533fd8ef-2473-46ae-b683-0ec1ec6e7be2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-822582 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"29343830-5e48-4666-b88c-2f10b52a8ec2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-18T14:47:59Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"0837a45b-b91a-4fa8-94c7-4caac42fb3a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-822582 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.38s)
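
The GUEST_PAUSE failure reduces to `sudo runc list -f json` failing with "open /run/runc: no such file or directory": minikube enumerates running containers through runc before pausing, and the runc state directory is absent on this node (the configured OCI runtime evidently keeps its state elsewhere). A diagnostic sketch on the node:

    minikube -p json-output-822582 ssh -- sudo ls /run/runc   # reproduces the error from the log
    minikube -p json-output-822582 ssh -- sudo crictl ps -a   # the containers are still visible to CRI-O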

                                                
                                    
TestJSONOutput/unpause/Command (2.03s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-822582 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-822582 --output=json --user=testUser: exit status 80 (2.02755749s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"29201404-7256-481d-a29a-1ed11d6f0167","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-822582 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"08e5dfef-7618-466a-8eab-9e7fc57852fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-18T14:48:01Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"6b4bf02c-6bdb-43e1-b82f-3e978ccb20fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-822582 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.03s)

                                                
                                    
TestPause/serial/Pause (5.81s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-552434 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-552434 --alsologtostderr -v=5: exit status 80 (1.792158941s)

                                                
                                                
-- stdout --
	* Pausing node pause-552434 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 15:02:38.477302  303236 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:02:38.477556  303236 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:02:38.477565  303236 out.go:374] Setting ErrFile to fd 2...
	I1018 15:02:38.477569  303236 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:02:38.477793  303236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:02:38.478040  303236 out.go:368] Setting JSON to false
	I1018 15:02:38.478085  303236 mustload.go:65] Loading cluster: pause-552434
	I1018 15:02:38.478511  303236 config.go:182] Loaded profile config "pause-552434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:02:38.478888  303236 cli_runner.go:164] Run: docker container inspect pause-552434 --format={{.State.Status}}
	I1018 15:02:38.497468  303236 host.go:66] Checking if "pause-552434" exists ...
	I1018 15:02:38.497756  303236 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:02:38.558193  303236 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-18 15:02:38.54715363 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:02:38.559015  303236 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-552434 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 15:02:38.560792  303236 out.go:179] * Pausing node pause-552434 ... 
	I1018 15:02:38.561971  303236 host.go:66] Checking if "pause-552434" exists ...
	I1018 15:02:38.562231  303236 ssh_runner.go:195] Run: systemctl --version
	I1018 15:02:38.562267  303236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-552434
	I1018 15:02:38.580904  303236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/pause-552434/id_rsa Username:docker}
	I1018 15:02:38.678485  303236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:02:38.693310  303236 pause.go:52] kubelet running: true
	I1018 15:02:38.693461  303236 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:02:38.836697  303236 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:02:38.836783  303236 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:02:38.905256  303236 cri.go:89] found id: "83e63972ae32f350d8fe81115bad361203b44a85b2d3034e9cffa0da575020a6"
	I1018 15:02:38.905284  303236 cri.go:89] found id: "e38d3fecd94dce09c272919a6b3a4893b97e5e77a057ca49c43c1ef2d6a20b6a"
	I1018 15:02:38.905291  303236 cri.go:89] found id: "15f33accd9028684dd1a5553833f500450d0eb21fbbe9b87813fa2c338823e84"
	I1018 15:02:38.905295  303236 cri.go:89] found id: "f07b384914dac3fb4353daeb697eeceec6d1144cacc371aa0b6ae1c6306ecbf2"
	I1018 15:02:38.905300  303236 cri.go:89] found id: "309d2698f21c03ab93a63ac602bee45208d646cc594c6c20bf1b7fde417c6c8e"
	I1018 15:02:38.905303  303236 cri.go:89] found id: "9b700ddbcc14c420cbc0df381ae301fef7d682c6ca669b88a2fa3253285eb9bf"
	I1018 15:02:38.905307  303236 cri.go:89] found id: "ef0a486a7b8571324c6d2501c8769bc1892c74983b0a30f02f7e96ff6327aaad"
	I1018 15:02:38.905311  303236 cri.go:89] found id: ""
	I1018 15:02:38.905372  303236 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:02:38.918759  303236 retry.go:31] will retry after 307.796854ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:02:38Z" level=error msg="open /run/runc: no such file or directory"
	I1018 15:02:39.227260  303236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:02:39.240647  303236 pause.go:52] kubelet running: false
	I1018 15:02:39.240743  303236 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:02:39.353557  303236 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:02:39.353626  303236 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:02:39.421958  303236 cri.go:89] found id: "83e63972ae32f350d8fe81115bad361203b44a85b2d3034e9cffa0da575020a6"
	I1018 15:02:39.421986  303236 cri.go:89] found id: "e38d3fecd94dce09c272919a6b3a4893b97e5e77a057ca49c43c1ef2d6a20b6a"
	I1018 15:02:39.421992  303236 cri.go:89] found id: "15f33accd9028684dd1a5553833f500450d0eb21fbbe9b87813fa2c338823e84"
	I1018 15:02:39.421996  303236 cri.go:89] found id: "f07b384914dac3fb4353daeb697eeceec6d1144cacc371aa0b6ae1c6306ecbf2"
	I1018 15:02:39.422001  303236 cri.go:89] found id: "309d2698f21c03ab93a63ac602bee45208d646cc594c6c20bf1b7fde417c6c8e"
	I1018 15:02:39.422005  303236 cri.go:89] found id: "9b700ddbcc14c420cbc0df381ae301fef7d682c6ca669b88a2fa3253285eb9bf"
	I1018 15:02:39.422046  303236 cri.go:89] found id: "ef0a486a7b8571324c6d2501c8769bc1892c74983b0a30f02f7e96ff6327aaad"
	I1018 15:02:39.422057  303236 cri.go:89] found id: ""
	I1018 15:02:39.422112  303236 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:02:39.434552  303236 retry.go:31] will retry after 561.340634ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:02:39Z" level=error msg="open /run/runc: no such file or directory"
	I1018 15:02:39.996061  303236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:02:40.008626  303236 pause.go:52] kubelet running: false
	I1018 15:02:40.008707  303236 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:02:40.122659  303236 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:02:40.122744  303236 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:02:40.195657  303236 cri.go:89] found id: "83e63972ae32f350d8fe81115bad361203b44a85b2d3034e9cffa0da575020a6"
	I1018 15:02:40.195686  303236 cri.go:89] found id: "e38d3fecd94dce09c272919a6b3a4893b97e5e77a057ca49c43c1ef2d6a20b6a"
	I1018 15:02:40.195692  303236 cri.go:89] found id: "15f33accd9028684dd1a5553833f500450d0eb21fbbe9b87813fa2c338823e84"
	I1018 15:02:40.195696  303236 cri.go:89] found id: "f07b384914dac3fb4353daeb697eeceec6d1144cacc371aa0b6ae1c6306ecbf2"
	I1018 15:02:40.195699  303236 cri.go:89] found id: "309d2698f21c03ab93a63ac602bee45208d646cc594c6c20bf1b7fde417c6c8e"
	I1018 15:02:40.195703  303236 cri.go:89] found id: "9b700ddbcc14c420cbc0df381ae301fef7d682c6ca669b88a2fa3253285eb9bf"
	I1018 15:02:40.195706  303236 cri.go:89] found id: "ef0a486a7b8571324c6d2501c8769bc1892c74983b0a30f02f7e96ff6327aaad"
	I1018 15:02:40.195710  303236 cri.go:89] found id: ""
	I1018 15:02:40.195777  303236 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:02:40.210806  303236 out.go:203] 
	W1018 15:02:40.212006  303236 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:02:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:02:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 15:02:40.212025  303236 out.go:285] * 
	* 
	W1018 15:02:40.217742  303236 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 15:02:40.219165  303236 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-552434 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-552434
helpers_test.go:243: (dbg) docker inspect pause-552434:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f6579505a3b2004fab034e191688657e1ce18377911586a8a9ccdde1fc41a551",
	        "Created": "2025-10-18T15:01:27.758869286Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 285828,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T15:01:27.828554989Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/f6579505a3b2004fab034e191688657e1ce18377911586a8a9ccdde1fc41a551/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f6579505a3b2004fab034e191688657e1ce18377911586a8a9ccdde1fc41a551/hostname",
	        "HostsPath": "/var/lib/docker/containers/f6579505a3b2004fab034e191688657e1ce18377911586a8a9ccdde1fc41a551/hosts",
	        "LogPath": "/var/lib/docker/containers/f6579505a3b2004fab034e191688657e1ce18377911586a8a9ccdde1fc41a551/f6579505a3b2004fab034e191688657e1ce18377911586a8a9ccdde1fc41a551-json.log",
	        "Name": "/pause-552434",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-552434:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-552434",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f6579505a3b2004fab034e191688657e1ce18377911586a8a9ccdde1fc41a551",
	                "LowerDir": "/var/lib/docker/overlay2/6fb06321da58a31f868fc65dcd15f68403adaee09c85080f07ca7f60592006b1-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6fb06321da58a31f868fc65dcd15f68403adaee09c85080f07ca7f60592006b1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6fb06321da58a31f868fc65dcd15f68403adaee09c85080f07ca7f60592006b1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6fb06321da58a31f868fc65dcd15f68403adaee09c85080f07ca7f60592006b1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-552434",
	                "Source": "/var/lib/docker/volumes/pause-552434/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-552434",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-552434",
	                "name.minikube.sigs.k8s.io": "pause-552434",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "29900e711d7824d6bfa6d53d7d115550dfd04199eb809588c99c4aef4385f550",
	            "SandboxKey": "/var/run/docker/netns/29900e711d78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33003"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33004"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33007"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33005"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33006"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-552434": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:2e:65:b2:ea:48",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "51c247ddc726a0b307ab7cffda8e1a5a54da0b1db9b37c1506ebaa8b40b84775",
	                    "EndpointID": "d7acf4088ea5aedf676fd01131878477619e6d7c2e65647f2a436acda6601843",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-552434",
	                        "f6579505a3b2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
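Most of the inspect dump above is noise for this failure; the fields the post-mortem actually cares about can be pulled directly with a Go template (expected values taken from the dump above):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}}' pause-552434
	# running paused=false pid=285828

Note that `Paused: false` on the kic node container is expected even after `minikube pause`: pausing targets the Kubernetes workloads inside the node, not the Docker container hosting it.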
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-552434 -n pause-552434
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-552434 -n pause-552434: exit status 2 (374.896346ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
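Per `minikube status --help`, the exit code is a bitmask of component health (1 = host NOK, 2 = cluster NOK, 4 = Kubernetes NOK), so exit status 2 with `Running` on stdout means the host container is up but the control plane is not healthy, which is plausible midway through a failed pause. For a machine-readable view (same profile flag as the harness, plus JSON output):

	out/minikube-linux-amd64 status -p pause-552434 --output=json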
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-552434 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-552434 logs -n 25: (1.189901451s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-034446 sudo systemctl cat docker --no-pager                                                                       │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:01 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo cat /etc/docker/daemon.json                                                                           │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo docker system info                                                                                    │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo cri-dockerd --version                                                                                 │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo systemctl cat containerd --no-pager                                                                   │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo cat /etc/containerd/config.toml                                                                       │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo containerd config dump                                                                                │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo systemctl cat crio --no-pager                                                                         │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo crio config                                                                                           │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ delete  │ -p cilium-034446                                                                                                            │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ start   │ -p force-systemd-flag-536692 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-536692 │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ ssh     │ -p NoKubernetes-286873 sudo systemctl is-active --quiet service kubelet                                                     │ NoKubernetes-286873       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ force-systemd-flag-536692 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                        │ force-systemd-flag-536692 │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ delete  │ -p force-systemd-flag-536692                                                                                                │ force-systemd-flag-536692 │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ start   │ -p force-systemd-env-680592 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-680592  │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ start   │ -p pause-552434 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-552434              │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ pause   │ -p pause-552434 --alsologtostderr -v=5                                                                                      │ pause-552434              │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:02:32
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:02:32.618680  301324 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:02:32.618789  301324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:02:32.618798  301324 out.go:374] Setting ErrFile to fd 2...
	I1018 15:02:32.618803  301324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:02:32.619018  301324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:02:32.619431  301324 out.go:368] Setting JSON to false
	I1018 15:02:32.620699  301324 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9904,"bootTime":1760789849,"procs":433,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:02:32.620798  301324 start.go:141] virtualization: kvm guest
	I1018 15:02:32.622839  301324 out.go:179] * [pause-552434] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:02:32.624050  301324 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:02:32.624084  301324 notify.go:220] Checking for updates...
	I1018 15:02:32.626413  301324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:02:32.627605  301324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:02:32.629302  301324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 15:02:32.630469  301324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:02:32.632197  301324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:02:32.633996  301324 config.go:182] Loaded profile config "pause-552434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:02:32.634493  301324 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:02:32.659644  301324 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:02:32.659754  301324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:02:32.718536  301324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-18 15:02:32.707134425 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:02:32.718688  301324 docker.go:318] overlay module found
	I1018 15:02:32.720215  301324 out.go:179] * Using the docker driver based on existing profile
	I1018 15:02:32.721207  301324 start.go:305] selected driver: docker
	I1018 15:02:32.721223  301324 start.go:925] validating driver "docker" against &{Name:pause-552434 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-552434 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:02:32.721351  301324 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:02:32.721450  301324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:02:32.779261  301324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-18 15:02:32.76947479 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:02:32.780186  301324 cni.go:84] Creating CNI manager for ""
	I1018 15:02:32.780266  301324 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:02:32.780348  301324 start.go:349] cluster config:
	{Name:pause-552434 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-552434 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:02:32.782310  301324 out.go:179] * Starting "pause-552434" primary control-plane node in "pause-552434" cluster
	I1018 15:02:32.783463  301324 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:02:32.784542  301324 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:02:32.785512  301324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:02:32.785549  301324 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 15:02:32.785559  301324 cache.go:58] Caching tarball of preloaded images
	I1018 15:02:32.785618  301324 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 15:02:32.785648  301324 preload.go:233] Found /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 15:02:32.785663  301324 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 15:02:32.785837  301324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434/config.json ...
	I1018 15:02:32.807882  301324 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:02:32.807904  301324 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 15:02:32.807941  301324 cache.go:232] Successfully downloaded all kic artifacts
	I1018 15:02:32.807973  301324 start.go:360] acquireMachinesLock for pause-552434: {Name:mk29a6cd7adf94a55f4554653c9a38077fac2a1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:02:32.808035  301324 start.go:364] duration metric: took 39.133µs to acquireMachinesLock for "pause-552434"
	I1018 15:02:32.808066  301324 start.go:96] Skipping create...Using existing machine configuration
	I1018 15:02:32.808076  301324 fix.go:54] fixHost starting: 
	I1018 15:02:32.808319  301324 cli_runner.go:164] Run: docker container inspect pause-552434 --format={{.State.Status}}
	I1018 15:02:32.826737  301324 fix.go:112] recreateIfNeeded on pause-552434: state=Running err=<nil>
	W1018 15:02:32.826775  301324 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 15:02:29.979841  299500 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-680592:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.396844866s)
	I1018 15:02:29.979877  299500 kic.go:203] duration metric: took 4.397026123s to extract preloaded images to volume ...
	W1018 15:02:29.979993  299500 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 15:02:29.980040  299500 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 15:02:29.980090  299500 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 15:02:30.036934  299500 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-680592 --name force-systemd-env-680592 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-680592 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-680592 --network force-systemd-env-680592 --ip 192.168.103.2 --volume force-systemd-env-680592:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 15:02:30.313950  299500 cli_runner.go:164] Run: docker container inspect force-systemd-env-680592 --format={{.State.Running}}
	I1018 15:02:30.332320  299500 cli_runner.go:164] Run: docker container inspect force-systemd-env-680592 --format={{.State.Status}}
	I1018 15:02:30.350008  299500 cli_runner.go:164] Run: docker exec force-systemd-env-680592 stat /var/lib/dpkg/alternatives/iptables
	I1018 15:02:30.395589  299500 oci.go:144] the created container "force-systemd-env-680592" has a running status.
	I1018 15:02:30.395621  299500 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/force-systemd-env-680592/id_rsa...
	I1018 15:02:30.772193  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/force-systemd-env-680592/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1018 15:02:30.772243  299500 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-89690/.minikube/machines/force-systemd-env-680592/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 15:02:30.803586  299500 cli_runner.go:164] Run: docker container inspect force-systemd-env-680592 --format={{.State.Status}}
	I1018 15:02:30.826984  299500 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 15:02:30.827013  299500 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-680592 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 15:02:30.885221  299500 cli_runner.go:164] Run: docker container inspect force-systemd-env-680592 --format={{.State.Status}}
	I1018 15:02:30.906557  299500 machine.go:93] provisionDockerMachine start ...
	I1018 15:02:30.906655  299500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-680592
	I1018 15:02:30.930049  299500 main.go:141] libmachine: Using SSH client type: native
	I1018 15:02:30.930498  299500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1018 15:02:30.930520  299500 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:02:31.071284  299500 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-680592
	
	I1018 15:02:31.071334  299500 ubuntu.go:182] provisioning hostname "force-systemd-env-680592"
	I1018 15:02:31.071404  299500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-680592
	I1018 15:02:31.091015  299500 main.go:141] libmachine: Using SSH client type: native
	I1018 15:02:31.091267  299500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1018 15:02:31.091286  299500 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-680592 && echo "force-systemd-env-680592" | sudo tee /etc/hostname
	I1018 15:02:31.237252  299500 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-680592
	
	I1018 15:02:31.237354  299500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-680592
	I1018 15:02:31.257964  299500 main.go:141] libmachine: Using SSH client type: native
	I1018 15:02:31.258298  299500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1018 15:02:31.258330  299500 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-680592' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-680592/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-680592' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:02:31.398471  299500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 15:02:31.398510  299500 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 15:02:31.398559  299500 ubuntu.go:190] setting up certificates
	I1018 15:02:31.398579  299500 provision.go:84] configureAuth start
	I1018 15:02:31.398633  299500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-680592
	I1018 15:02:31.418097  299500 provision.go:143] copyHostCerts
	I1018 15:02:31.418140  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:02:31.418180  299500 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem, removing ...
	I1018 15:02:31.418192  299500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:02:31.418270  299500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 15:02:31.418372  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:02:31.418400  299500 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem, removing ...
	I1018 15:02:31.418407  299500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:02:31.418456  299500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 15:02:31.418533  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:02:31.418558  299500 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem, removing ...
	I1018 15:02:31.418564  299500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:02:31.418604  299500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 15:02:31.418678  299500 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-680592 san=[127.0.0.1 192.168.103.2 force-systemd-env-680592 localhost minikube]
	I1018 15:02:31.843440  299500 provision.go:177] copyRemoteCerts
	I1018 15:02:31.843519  299500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:02:31.843569  299500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-680592
	I1018 15:02:31.861158  299500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/force-systemd-env-680592/id_rsa Username:docker}
	I1018 15:02:31.958328  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 15:02:31.958404  299500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:02:31.977993  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 15:02:31.978065  299500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1018 15:02:31.996147  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 15:02:31.996230  299500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 15:02:32.014141  299500 provision.go:87] duration metric: took 615.545088ms to configureAuth
	I1018 15:02:32.014170  299500 ubuntu.go:206] setting minikube options for container-runtime
	I1018 15:02:32.014355  299500 config.go:182] Loaded profile config "force-systemd-env-680592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:02:32.014471  299500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-680592
	I1018 15:02:32.031955  299500 main.go:141] libmachine: Using SSH client type: native
	I1018 15:02:32.032201  299500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1018 15:02:32.032223  299500 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:02:32.277159  299500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 15:02:32.277188  299500 machine.go:96] duration metric: took 1.37060755s to provisionDockerMachine
	I1018 15:02:32.277202  299500 client.go:171] duration metric: took 7.229547514s to LocalClient.Create
	I1018 15:02:32.277220  299500 start.go:167] duration metric: took 7.229606992s to libmachine.API.Create "force-systemd-env-680592"
	I1018 15:02:32.277228  299500 start.go:293] postStartSetup for "force-systemd-env-680592" (driver="docker")
	I1018 15:02:32.277241  299500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:02:32.277302  299500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:02:32.277343  299500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-680592
	I1018 15:02:32.296005  299500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/force-systemd-env-680592/id_rsa Username:docker}
	I1018 15:02:32.395273  299500 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:02:32.399038  299500 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 15:02:32.399067  299500 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 15:02:32.399079  299500 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/addons for local assets ...
	I1018 15:02:32.399128  299500 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/files for local assets ...
	I1018 15:02:32.399201  299500 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem -> 931872.pem in /etc/ssl/certs
	I1018 15:02:32.399209  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem -> /etc/ssl/certs/931872.pem
	I1018 15:02:32.399291  299500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:02:32.407509  299500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:02:32.429174  299500 start.go:296] duration metric: took 151.929641ms for postStartSetup
	I1018 15:02:32.429556  299500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-680592
	I1018 15:02:32.447339  299500 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/config.json ...
	I1018 15:02:32.447687  299500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 15:02:32.447749  299500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-680592
	I1018 15:02:32.464885  299500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/force-systemd-env-680592/id_rsa Username:docker}
	I1018 15:02:32.561403  299500 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 15:02:32.566201  299500 start.go:128] duration metric: took 7.52083747s to createHost
	I1018 15:02:32.566221  299500 start.go:83] releasing machines lock for "force-systemd-env-680592", held for 7.520968995s
	I1018 15:02:32.566287  299500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-680592
	I1018 15:02:32.586815  299500 ssh_runner.go:195] Run: cat /version.json
	I1018 15:02:32.586846  299500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:02:32.586860  299500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-680592
	I1018 15:02:32.586937  299500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-680592
	I1018 15:02:32.607697  299500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/force-systemd-env-680592/id_rsa Username:docker}
	I1018 15:02:32.607809  299500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/force-systemd-env-680592/id_rsa Username:docker}
	I1018 15:02:32.759948  299500 ssh_runner.go:195] Run: systemctl --version
	I1018 15:02:32.767905  299500 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:02:32.807173  299500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:02:32.812701  299500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:02:32.812765  299500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:02:32.840432  299500 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 15:02:32.840457  299500 start.go:495] detecting cgroup driver to use...
	I1018 15:02:32.840479  299500 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1018 15:02:32.840533  299500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:02:32.858469  299500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:02:32.871396  299500 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:02:32.871448  299500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:02:32.888419  299500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:02:32.907183  299500 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:02:32.994350  299500 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:02:33.086528  299500 docker.go:234] disabling docker service ...
	I1018 15:02:33.086599  299500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:02:33.104641  299500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:02:33.117604  299500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:02:33.204879  299500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:02:33.292465  299500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 15:02:33.305135  299500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:02:33.320249  299500 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 15:02:33.320307  299500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:02:33.331287  299500 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 15:02:33.331352  299500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:02:33.340427  299500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:02:33.349332  299500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:02:33.357985  299500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:02:33.366476  299500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:02:33.375628  299500 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:02:33.389428  299500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:02:33.398190  299500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:02:33.405863  299500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 15:02:33.414060  299500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:02:33.498025  299500 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 15:02:33.603636  299500 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:02:33.603705  299500 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:02:33.607941  299500 start.go:563] Will wait 60s for crictl version
	I1018 15:02:33.608003  299500 ssh_runner.go:195] Run: which crictl
	I1018 15:02:33.611761  299500 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 15:02:33.636938  299500 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 15:02:33.637013  299500 ssh_runner.go:195] Run: crio --version
	I1018 15:02:33.666723  299500 ssh_runner.go:195] Run: crio --version
	I1018 15:02:33.698940  299500 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
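The run above is minikube's standard CRI-O handoff on the Docker driver: stop and mask the containerd, cri-docker, and docker units, point crictl at the CRI-O socket, patch /etc/crio/crio.conf.d/02-crio.conf in place with sed, then daemon-reload, restart crio, and wait on the socket. A minimal standalone sketch of the same steps, using only paths and values that appear in the log (minikube itself drives these over SSH from Go):

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pause image and systemd cgroup driver, as the sed lines above set them
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio
    sudo crictl version    # expected to report RuntimeName: cri-o, as above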
	I1018 15:02:30.780243  278049 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 15:02:30.780687  278049 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1018 15:02:30.780746  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 15:02:30.780810  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 15:02:30.816151  278049 cri.go:89] found id: "9fd193852421eda1fa8af182d3f4e060e771279c135421267bf2b3e8351b4e9b"
	I1018 15:02:30.816193  278049 cri.go:89] found id: ""
	I1018 15:02:30.816205  278049 logs.go:282] 1 containers: [9fd193852421eda1fa8af182d3f4e060e771279c135421267bf2b3e8351b4e9b]
	I1018 15:02:30.816271  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:02:30.821395  278049 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 15:02:30.821484  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 15:02:30.860370  278049 cri.go:89] found id: ""
	I1018 15:02:30.860398  278049 logs.go:282] 0 containers: []
	W1018 15:02:30.860409  278049 logs.go:284] No container was found matching "etcd"
	I1018 15:02:30.860416  278049 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 15:02:30.860484  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 15:02:30.895437  278049 cri.go:89] found id: ""
	I1018 15:02:30.895467  278049 logs.go:282] 0 containers: []
	W1018 15:02:30.895476  278049 logs.go:284] No container was found matching "coredns"
	I1018 15:02:30.895482  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 15:02:30.895546  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 15:02:30.933480  278049 cri.go:89] found id: "3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:02:30.933509  278049 cri.go:89] found id: ""
	I1018 15:02:30.933521  278049 logs.go:282] 1 containers: [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011]
	I1018 15:02:30.933586  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:02:30.938712  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 15:02:30.938784  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 15:02:30.969086  278049 cri.go:89] found id: ""
	I1018 15:02:30.969116  278049 logs.go:282] 0 containers: []
	W1018 15:02:30.969127  278049 logs.go:284] No container was found matching "kube-proxy"
	I1018 15:02:30.969144  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 15:02:30.969204  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 15:02:31.000672  278049 cri.go:89] found id: "2c7da3e123fe889df18ea7aa9245556c2393bbb48c6b8d832286db1eeafa4b7a"
	I1018 15:02:31.000693  278049 cri.go:89] found id: ""
	I1018 15:02:31.000701  278049 logs.go:282] 1 containers: [2c7da3e123fe889df18ea7aa9245556c2393bbb48c6b8d832286db1eeafa4b7a]
	I1018 15:02:31.000760  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:02:31.004983  278049 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 15:02:31.005058  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 15:02:31.035740  278049 cri.go:89] found id: ""
	I1018 15:02:31.035766  278049 logs.go:282] 0 containers: []
	W1018 15:02:31.035773  278049 logs.go:284] No container was found matching "kindnet"
	I1018 15:02:31.035780  278049 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 15:02:31.035828  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 15:02:31.065573  278049 cri.go:89] found id: ""
	I1018 15:02:31.065602  278049 logs.go:282] 0 containers: []
	W1018 15:02:31.065614  278049 logs.go:284] No container was found matching "storage-provisioner"
	I1018 15:02:31.065626  278049 logs.go:123] Gathering logs for CRI-O ...
	I1018 15:02:31.065642  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 15:02:31.111272  278049 logs.go:123] Gathering logs for container status ...
	I1018 15:02:31.111304  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 15:02:31.143138  278049 logs.go:123] Gathering logs for kubelet ...
	I1018 15:02:31.143175  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 15:02:31.209290  278049 logs.go:123] Gathering logs for dmesg ...
	I1018 15:02:31.209328  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 15:02:31.225415  278049 logs.go:123] Gathering logs for describe nodes ...
	I1018 15:02:31.225444  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 15:02:31.286752  278049 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 15:02:31.286779  278049 logs.go:123] Gathering logs for kube-apiserver [9fd193852421eda1fa8af182d3f4e060e771279c135421267bf2b3e8351b4e9b] ...
	I1018 15:02:31.286794  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fd193852421eda1fa8af182d3f4e060e771279c135421267bf2b3e8351b4e9b"
	I1018 15:02:31.324222  278049 logs.go:123] Gathering logs for kube-scheduler [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011] ...
	I1018 15:02:31.324257  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:02:31.366432  278049 logs.go:123] Gathering logs for kube-controller-manager [2c7da3e123fe889df18ea7aa9245556c2393bbb48c6b8d832286db1eeafa4b7a] ...
	I1018 15:02:31.366467  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2c7da3e123fe889df18ea7aa9245556c2393bbb48c6b8d832286db1eeafa4b7a"
	I1018 15:02:33.893973  278049 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 15:02:33.894424  278049 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1018 15:02:33.894503  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 15:02:33.894575  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 15:02:33.926280  278049 cri.go:89] found id: "9fd193852421eda1fa8af182d3f4e060e771279c135421267bf2b3e8351b4e9b"
	I1018 15:02:33.926306  278049 cri.go:89] found id: ""
	I1018 15:02:33.926317  278049 logs.go:282] 1 containers: [9fd193852421eda1fa8af182d3f4e060e771279c135421267bf2b3e8351b4e9b]
	I1018 15:02:33.926386  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:02:33.931303  278049 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 15:02:33.931381  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 15:02:33.966704  278049 cri.go:89] found id: ""
	I1018 15:02:33.966736  278049 logs.go:282] 0 containers: []
	W1018 15:02:33.966748  278049 logs.go:284] No container was found matching "etcd"
	I1018 15:02:33.966756  278049 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 15:02:33.966819  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 15:02:33.996484  278049 cri.go:89] found id: ""
	I1018 15:02:33.996514  278049 logs.go:282] 0 containers: []
	W1018 15:02:33.996526  278049 logs.go:284] No container was found matching "coredns"
	I1018 15:02:33.996541  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 15:02:33.996612  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 15:02:34.025899  278049 cri.go:89] found id: "3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:02:34.025947  278049 cri.go:89] found id: ""
	I1018 15:02:34.025959  278049 logs.go:282] 1 containers: [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011]
	I1018 15:02:34.026040  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:02:34.030119  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 15:02:34.030197  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 15:02:34.061524  278049 cri.go:89] found id: ""
	I1018 15:02:34.061552  278049 logs.go:282] 0 containers: []
	W1018 15:02:34.061562  278049 logs.go:284] No container was found matching "kube-proxy"
	I1018 15:02:34.061570  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 15:02:34.061626  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 15:02:34.091616  278049 cri.go:89] found id: "2c7da3e123fe889df18ea7aa9245556c2393bbb48c6b8d832286db1eeafa4b7a"
	I1018 15:02:34.091637  278049 cri.go:89] found id: ""
	I1018 15:02:34.091645  278049 logs.go:282] 1 containers: [2c7da3e123fe889df18ea7aa9245556c2393bbb48c6b8d832286db1eeafa4b7a]
	I1018 15:02:34.091701  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:02:34.095899  278049 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 15:02:34.095977  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 15:02:34.125449  278049 cri.go:89] found id: ""
	I1018 15:02:34.125477  278049 logs.go:282] 0 containers: []
	W1018 15:02:34.125485  278049 logs.go:284] No container was found matching "kindnet"
	I1018 15:02:34.125490  278049 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 15:02:34.125544  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 15:02:34.156230  278049 cri.go:89] found id: ""
	I1018 15:02:34.156258  278049 logs.go:282] 0 containers: []
	W1018 15:02:34.156269  278049 logs.go:284] No container was found matching "storage-provisioner"
	I1018 15:02:34.156280  278049 logs.go:123] Gathering logs for kube-apiserver [9fd193852421eda1fa8af182d3f4e060e771279c135421267bf2b3e8351b4e9b] ...
	I1018 15:02:34.156295  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fd193852421eda1fa8af182d3f4e060e771279c135421267bf2b3e8351b4e9b"
	I1018 15:02:34.191424  278049 logs.go:123] Gathering logs for kube-scheduler [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011] ...
	I1018 15:02:34.191456  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:02:34.236640  278049 logs.go:123] Gathering logs for kube-controller-manager [2c7da3e123fe889df18ea7aa9245556c2393bbb48c6b8d832286db1eeafa4b7a] ...
	I1018 15:02:34.236676  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2c7da3e123fe889df18ea7aa9245556c2393bbb48c6b8d832286db1eeafa4b7a"
	I1018 15:02:34.264417  278049 logs.go:123] Gathering logs for CRI-O ...
	I1018 15:02:34.264447  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 15:02:34.313076  278049 logs.go:123] Gathering logs for container status ...
	I1018 15:02:34.313125  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 15:02:34.350848  278049 logs.go:123] Gathering logs for kubelet ...
	I1018 15:02:34.350877  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
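Each refused healthz probe above (192.168.76.2:8443) triggers the same diagnostic sweep: enumerate containers per control-plane component with crictl, tail the logs of whatever was found, and collect the crio and kubelet journals. A condensed sketch of one pass of that loop, with the component name and tail length taken from the log:

    # find the kube-apiserver container, if any, and tail its runtime logs
    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    [ -n "$id" ] && sudo crictl logs --tail 400 "$id"
    # unit-level logs gathered in the same sweep
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400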
	I1018 15:02:33.700166  299500 cli_runner.go:164] Run: docker network inspect force-systemd-env-680592 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:02:33.717827  299500 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 15:02:33.721840  299500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:02:33.732601  299500 kubeadm.go:883] updating cluster {Name:force-systemd-env-680592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-680592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 15:02:33.732749  299500 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:02:33.732803  299500 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:02:33.766338  299500 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:02:33.766366  299500 crio.go:433] Images already preloaded, skipping extraction
	I1018 15:02:33.766422  299500 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:02:33.792346  299500 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:02:33.792374  299500 cache_images.go:85] Images are preloaded, skipping loading
	I1018 15:02:33.792384  299500 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1018 15:02:33.792487  299500 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-680592 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-680592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 15:02:33.792570  299500 ssh_runner.go:195] Run: crio config
	I1018 15:02:33.840708  299500 cni.go:84] Creating CNI manager for ""
	I1018 15:02:33.840733  299500 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:02:33.840753  299500 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 15:02:33.840782  299500 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-680592 NodeName:force-systemd-env-680592 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 15:02:33.840986  299500 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-680592"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 15:02:33.841069  299500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 15:02:33.849568  299500 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 15:02:33.849638  299500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 15:02:33.858042  299500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1018 15:02:33.871838  299500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 15:02:33.888129  299500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
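The rendered config above bundles four kubeadm documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) into the single file the scp line writes to /var/tmp/minikube/kubeadm.yaml.new. A quick way to sanity-check such a file by hand (a sketch; a kubeadm binary with the `config validate` subcommand is assumed to sit alongside the v1.34.1 binaries listed above):

    # list the document kinds in the rendered file
    grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new
    # validate the whole multi-document config with the pinned kubeadm binary
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new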
	I1018 15:02:33.902934  299500 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 15:02:33.907528  299500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:02:33.920308  299500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:02:34.013760  299500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:02:34.037870  299500 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592 for IP: 192.168.103.2
	I1018 15:02:34.037891  299500 certs.go:195] generating shared ca certs ...
	I1018 15:02:34.037946  299500 certs.go:227] acquiring lock for ca certs: {Name:mk6d3b4e44856f498157352ffe8bb89d2c7a3998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:02:34.038104  299500 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key
	I1018 15:02:34.038158  299500 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key
	I1018 15:02:34.038175  299500 certs.go:257] generating profile certs ...
	I1018 15:02:34.038243  299500 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/client.key
	I1018 15:02:34.038266  299500 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/client.crt with IP's: []
	I1018 15:02:34.502668  299500 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/client.crt ...
	I1018 15:02:34.502698  299500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/client.crt: {Name:mk8b2a83d6f4106d6f76cc3ae13cf05c57b3d050 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:02:34.502875  299500 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/client.key ...
	I1018 15:02:34.502889  299500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/client.key: {Name:mkbda56da2d18a654068830553efa63d43032f5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:02:34.503016  299500 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/apiserver.key.3b9b718e
	I1018 15:02:34.503036  299500 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/apiserver.crt.3b9b718e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1018 15:02:34.565524  299500 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/apiserver.crt.3b9b718e ...
	I1018 15:02:34.565554  299500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/apiserver.crt.3b9b718e: {Name:mk1235e90deda2d78578c0a90cc16ef78d60a87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:02:34.565745  299500 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/apiserver.key.3b9b718e ...
	I1018 15:02:34.565764  299500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/apiserver.key.3b9b718e: {Name:mk70d850ac2dd0b39f8e3d5c006ff585818665d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:02:34.565874  299500 certs.go:382] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/apiserver.crt.3b9b718e -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/apiserver.crt
	I1018 15:02:34.566004  299500 certs.go:386] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/apiserver.key.3b9b718e -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/apiserver.key
	I1018 15:02:34.566084  299500 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/proxy-client.key
	I1018 15:02:34.566100  299500 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/proxy-client.crt with IP's: []
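The apiserver certificate generated above is pinned to the SAN list shown in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.103.2: the in-cluster service VIP, loopback, and the node IP). If TLS to the apiserver fails after an IP change, the SANs baked into the on-disk cert can be checked directly; a sketch using the profile path from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/apiserver.crt \
      | grep -A1 'Subject Alternative Name'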
	I1018 15:02:32.828405  301324 out.go:252] * Updating the running docker "pause-552434" container ...
	I1018 15:02:32.828437  301324 machine.go:93] provisionDockerMachine start ...
	I1018 15:02:32.828503  301324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-552434
	I1018 15:02:32.847663  301324 main.go:141] libmachine: Using SSH client type: native
	I1018 15:02:32.848005  301324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1018 15:02:32.848050  301324 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:02:32.983170  301324 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-552434
	
	I1018 15:02:32.983202  301324 ubuntu.go:182] provisioning hostname "pause-552434"
	I1018 15:02:32.983274  301324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-552434
	I1018 15:02:33.001796  301324 main.go:141] libmachine: Using SSH client type: native
	I1018 15:02:33.002203  301324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1018 15:02:33.002228  301324 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-552434 && echo "pause-552434" | sudo tee /etc/hostname
	I1018 15:02:33.150199  301324 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-552434
	
	I1018 15:02:33.150290  301324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-552434
	I1018 15:02:33.169160  301324 main.go:141] libmachine: Using SSH client type: native
	I1018 15:02:33.169418  301324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1018 15:02:33.169442  301324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-552434' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-552434/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-552434' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:02:33.305563  301324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 15:02:33.305603  301324 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 15:02:33.305645  301324 ubuntu.go:190] setting up certificates
	I1018 15:02:33.305660  301324 provision.go:84] configureAuth start
	I1018 15:02:33.305721  301324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-552434
	I1018 15:02:33.324686  301324 provision.go:143] copyHostCerts
	I1018 15:02:33.324749  301324 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem, removing ...
	I1018 15:02:33.324766  301324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:02:33.324824  301324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 15:02:33.324946  301324 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem, removing ...
	I1018 15:02:33.324961  301324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:02:33.324987  301324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 15:02:33.325073  301324 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem, removing ...
	I1018 15:02:33.325081  301324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:02:33.325102  301324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 15:02:33.325194  301324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.pause-552434 san=[127.0.0.1 192.168.94.2 localhost minikube pause-552434]
	I1018 15:02:33.487106  301324 provision.go:177] copyRemoteCerts
	I1018 15:02:33.487177  301324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:02:33.487217  301324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-552434
	I1018 15:02:33.505659  301324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/pause-552434/id_rsa Username:docker}
	I1018 15:02:33.605836  301324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:02:33.624724  301324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 15:02:33.644270  301324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 15:02:33.663369  301324 provision.go:87] duration metric: took 357.691478ms to configureAuth
	I1018 15:02:33.663429  301324 ubuntu.go:206] setting minikube options for container-runtime
	I1018 15:02:33.663617  301324 config.go:182] Loaded profile config "pause-552434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:02:33.663713  301324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-552434
	I1018 15:02:33.682190  301324 main.go:141] libmachine: Using SSH client type: native
	I1018 15:02:33.682409  301324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1018 15:02:33.682428  301324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:02:33.994576  301324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 15:02:33.994604  301324 machine.go:96] duration metric: took 1.166159692s to provisionDockerMachine
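The SSH command above drops CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' into /etc/sysconfig/crio.minikube and restarts crio. Presumably the kicbase image's crio unit sources that file (e.g. via an EnvironmentFile= directive); that wiring is an assumption here rather than something shown in the log, and can be confirmed on the node:

    # confirm the drop-in exists and find which unit consumes it
    cat /etc/sysconfig/crio.minikube
    grep -rs CRIO_MINIKUBE_OPTIONS /lib/systemd/system /etc/systemd/system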
	I1018 15:02:33.994618  301324 start.go:293] postStartSetup for "pause-552434" (driver="docker")
	I1018 15:02:33.994631  301324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:02:33.994710  301324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:02:33.994757  301324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-552434
	I1018 15:02:34.013958  301324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/pause-552434/id_rsa Username:docker}
	I1018 15:02:34.117578  301324 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:02:34.122509  301324 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 15:02:34.122545  301324 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 15:02:34.122560  301324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/addons for local assets ...
	I1018 15:02:34.122620  301324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/files for local assets ...
	I1018 15:02:34.122716  301324 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem -> 931872.pem in /etc/ssl/certs
	I1018 15:02:34.122830  301324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:02:34.131927  301324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:02:34.152282  301324 start.go:296] duration metric: took 157.647478ms for postStartSetup
	I1018 15:02:34.152394  301324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 15:02:34.152444  301324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-552434
	I1018 15:02:34.172490  301324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/pause-552434/id_rsa Username:docker}
	I1018 15:02:34.271212  301324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 15:02:34.276275  301324 fix.go:56] duration metric: took 1.468192577s for fixHost
	I1018 15:02:34.276309  301324 start.go:83] releasing machines lock for "pause-552434", held for 1.468260144s
	I1018 15:02:34.276373  301324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-552434
	I1018 15:02:34.296665  301324 ssh_runner.go:195] Run: cat /version.json
	I1018 15:02:34.296742  301324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-552434
	I1018 15:02:34.296747  301324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:02:34.296817  301324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-552434
	I1018 15:02:34.319308  301324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/pause-552434/id_rsa Username:docker}
	I1018 15:02:34.319338  301324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/pause-552434/id_rsa Username:docker}
	I1018 15:02:34.472205  301324 ssh_runner.go:195] Run: systemctl --version
	I1018 15:02:34.480140  301324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:02:34.522703  301324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:02:34.527843  301324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:02:34.527899  301324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:02:34.536681  301324 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 15:02:34.536717  301324 start.go:495] detecting cgroup driver to use...
	I1018 15:02:34.536757  301324 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 15:02:34.536813  301324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:02:34.552116  301324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:02:34.566305  301324 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:02:34.566364  301324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:02:34.582134  301324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:02:34.596004  301324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:02:34.705263  301324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:02:34.813835  301324 docker.go:234] disabling docker service ...
	I1018 15:02:34.813906  301324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:02:34.832802  301324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:02:34.845838  301324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:02:34.964432  301324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:02:35.078557  301324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 15:02:35.091629  301324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:02:35.106160  301324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 15:02:35.106211  301324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:02:35.115958  301324 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 15:02:35.116017  301324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:02:35.124931  301324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:02:35.134265  301324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:02:35.143418  301324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:02:35.152456  301324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:02:35.162019  301324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:02:35.171332  301324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:02:35.181123  301324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:02:35.189292  301324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 15:02:35.196759  301324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:02:35.307652  301324 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 15:02:35.462342  301324 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:02:35.462413  301324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:02:35.466980  301324 start.go:563] Will wait 60s for crictl version
	I1018 15:02:35.467056  301324 ssh_runner.go:195] Run: which crictl
	I1018 15:02:35.471234  301324 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 15:02:35.498324  301324 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 15:02:35.498410  301324 ssh_runner.go:195] Run: crio --version
	I1018 15:02:35.530000  301324 ssh_runner.go:195] Run: crio --version
	I1018 15:02:35.561510  301324 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 15:02:35.562963  301324 cli_runner.go:164] Run: docker network inspect pause-552434 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:02:35.582191  301324 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 15:02:35.587113  301324 kubeadm.go:883] updating cluster {Name:pause-552434 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-552434 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 15:02:35.587231  301324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:02:35.587282  301324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:02:35.620848  301324 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:02:35.620879  301324 crio.go:433] Images already preloaded, skipping extraction
	I1018 15:02:35.620952  301324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:02:35.653756  301324 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:02:35.653782  301324 cache_images.go:85] Images are preloaded, skipping loading
	I1018 15:02:35.653792  301324 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1018 15:02:35.653962  301324 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-552434 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-552434 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 15:02:35.654055  301324 ssh_runner.go:195] Run: crio config
	I1018 15:02:35.707397  301324 cni.go:84] Creating CNI manager for ""
	I1018 15:02:35.707420  301324 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:02:35.707436  301324 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 15:02:35.707458  301324 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-552434 NodeName:pause-552434 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 15:02:35.707573  301324 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-552434"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 15:02:35.707630  301324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 15:02:35.716863  301324 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 15:02:35.716957  301324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 15:02:35.724668  301324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1018 15:02:35.736866  301324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 15:02:35.750310  301324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1018 15:02:35.762983  301324 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 15:02:35.766836  301324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:02:35.872314  301324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:02:35.886180  301324 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434 for IP: 192.168.94.2
	I1018 15:02:35.886200  301324 certs.go:195] generating shared ca certs ...
	I1018 15:02:35.886215  301324 certs.go:227] acquiring lock for ca certs: {Name:mk6d3b4e44856f498157352ffe8bb89d2c7a3998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:02:35.886368  301324 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key
	I1018 15:02:35.886433  301324 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key
	I1018 15:02:35.886449  301324 certs.go:257] generating profile certs ...
	I1018 15:02:35.886528  301324 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434/client.key
	I1018 15:02:35.886572  301324 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434/apiserver.key.74108efa
	I1018 15:02:35.886610  301324 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434/proxy-client.key
	I1018 15:02:35.886714  301324 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem (1338 bytes)
	W1018 15:02:35.886754  301324 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187_empty.pem, impossibly tiny 0 bytes
	I1018 15:02:35.886764  301324 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 15:02:35.886785  301324 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem (1082 bytes)
	I1018 15:02:35.886806  301324 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem (1123 bytes)
	I1018 15:02:35.886826  301324 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem (1675 bytes)
	I1018 15:02:35.886866  301324 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:02:35.887455  301324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 15:02:35.906399  301324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 15:02:35.924551  301324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 15:02:35.942571  301324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 15:02:35.960148  301324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 15:02:35.979016  301324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 15:02:35.997954  301324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 15:02:36.016366  301324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 15:02:36.033508  301324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem --> /usr/share/ca-certificates/93187.pem (1338 bytes)
	I1018 15:02:36.050878  301324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /usr/share/ca-certificates/931872.pem (1708 bytes)
	I1018 15:02:36.069218  301324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 15:02:36.087539  301324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 15:02:36.100386  301324 ssh_runner.go:195] Run: openssl version
	I1018 15:02:36.106750  301324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/931872.pem && ln -fs /usr/share/ca-certificates/931872.pem /etc/ssl/certs/931872.pem"
	I1018 15:02:36.115330  301324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/931872.pem
	I1018 15:02:36.119143  301324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 14:26 /usr/share/ca-certificates/931872.pem
	I1018 15:02:36.119200  301324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/931872.pem
	I1018 15:02:36.153316  301324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/931872.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 15:02:36.161981  301324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 15:02:36.170777  301324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:02:36.174722  301324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:02:36.174771  301324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:02:36.208907  301324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 15:02:36.218651  301324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93187.pem && ln -fs /usr/share/ca-certificates/93187.pem /etc/ssl/certs/93187.pem"
	I1018 15:02:36.227627  301324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93187.pem
	I1018 15:02:36.231722  301324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 14:26 /usr/share/ca-certificates/93187.pem
	I1018 15:02:36.231778  301324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93187.pem
	I1018 15:02:36.266463  301324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93187.pem /etc/ssl/certs/51391683.0"
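
[editor's note] The ls/openssl/ln sequences above follow OpenSSL's subject-hash lookup convention: "openssl x509 -hash -noout" prints the certificate's subject-name hash, and a symlink named <hash>.0 in /etc/ssl/certs (b5213941.0 for minikubeCA here, 3ec20f2e.0 and 51391683.0 for the test certs) is how TLS clients locate a CA by hash. A minimal Go sketch of the same step, shelling out to openssl; the paths are the ones from the log and otherwise hypothetical:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash shells out to "openssl x509 -hash -noout -in <path>",
// mirroring the commands in the log above. The returned hash, with a
// ".0" suffix, is the symlink name OpenSSL looks up in /etc/ssl/certs.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("openssl x509 -hash: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	hash, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	// For minikubeCA this prints the b5213941.0 link created above.
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}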
	I1018 15:02:36.275378  301324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 15:02:36.279637  301324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 15:02:36.313435  301324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 15:02:36.350757  301324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 15:02:36.385713  301324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 15:02:36.423198  301324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 15:02:36.462495  301324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
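
[editor's note] Each "openssl x509 -checkend 86400" run above asks whether a certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger cert regeneration before the control plane restarts. A rough pure-Go equivalent using crypto/x509, with one of the cert paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, which is exactly what "openssl x509 -checkend <seconds>"
// tests (non-zero exit when it does).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}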
	I1018 15:02:36.498195  301324 kubeadm.go:400] StartCluster: {Name:pause-552434 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-552434 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:02:36.498306  301324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 15:02:36.498354  301324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 15:02:36.529502  301324 cri.go:89] found id: "83e63972ae32f350d8fe81115bad361203b44a85b2d3034e9cffa0da575020a6"
	I1018 15:02:36.529525  301324 cri.go:89] found id: "e38d3fecd94dce09c272919a6b3a4893b97e5e77a057ca49c43c1ef2d6a20b6a"
	I1018 15:02:36.529531  301324 cri.go:89] found id: "15f33accd9028684dd1a5553833f500450d0eb21fbbe9b87813fa2c338823e84"
	I1018 15:02:36.529535  301324 cri.go:89] found id: "f07b384914dac3fb4353daeb697eeceec6d1144cacc371aa0b6ae1c6306ecbf2"
	I1018 15:02:36.529540  301324 cri.go:89] found id: "309d2698f21c03ab93a63ac602bee45208d646cc594c6c20bf1b7fde417c6c8e"
	I1018 15:02:36.529544  301324 cri.go:89] found id: "9b700ddbcc14c420cbc0df381ae301fef7d682c6ca669b88a2fa3253285eb9bf"
	I1018 15:02:36.529548  301324 cri.go:89] found id: "ef0a486a7b8571324c6d2501c8769bc1892c74983b0a30f02f7e96ff6327aaad"
	I1018 15:02:36.529551  301324 cri.go:89] found id: ""
	I1018 15:02:36.529597  301324 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 15:02:36.541433  301324 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:02:36Z" level=error msg="open /run/runc: no such file or directory"
	I1018 15:02:36.541507  301324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 15:02:36.550040  301324 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 15:02:36.550070  301324 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 15:02:36.550119  301324 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 15:02:36.558080  301324 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 15:02:36.558818  301324 kubeconfig.go:125] found "pause-552434" server: "https://192.168.94.2:8443"
	I1018 15:02:36.559884  301324 kapi.go:59] client config for pause-552434: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434/client.key", CAFile:"/home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 15:02:36.560509  301324 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 15:02:36.560529  301324 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 15:02:36.560537  301324 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 15:02:36.560543  301324 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 15:02:36.560556  301324 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 15:02:36.560967  301324 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 15:02:36.569681  301324 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1018 15:02:36.569712  301324 kubeadm.go:601] duration metric: took 19.637191ms to restartPrimaryControlPlane
	I1018 15:02:36.569722  301324 kubeadm.go:402] duration metric: took 71.539761ms to StartCluster
	I1018 15:02:36.569739  301324 settings.go:142] acquiring lock: {Name:mk9d9d72167811b3554435e137ed17137bf54a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:02:36.569805  301324 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:02:36.570686  301324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/kubeconfig: {Name:mkdc6fbc9be8e57447490b30dd5bb0086611876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:02:36.570907  301324 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:02:36.571018  301324 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 15:02:36.571179  301324 config.go:182] Loaded profile config "pause-552434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:02:36.573598  301324 out.go:179] * Verifying Kubernetes components...
	I1018 15:02:36.573600  301324 out.go:179] * Enabled addons: 
	I1018 15:02:36.575393  301324 addons.go:514] duration metric: took 4.385342ms for enable addons: enabled=[]
	I1018 15:02:36.575427  301324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:02:36.682625  301324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:02:36.696785  301324 node_ready.go:35] waiting up to 6m0s for node "pause-552434" to be "Ready" ...
	I1018 15:02:36.704713  301324 node_ready.go:49] node "pause-552434" is "Ready"
	I1018 15:02:36.704741  301324 node_ready.go:38] duration metric: took 7.903085ms for node "pause-552434" to be "Ready" ...
	I1018 15:02:36.704756  301324 api_server.go:52] waiting for apiserver process to appear ...
	I1018 15:02:36.704801  301324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:02:36.717155  301324 api_server.go:72] duration metric: took 146.188687ms to wait for apiserver process to appear ...
	I1018 15:02:36.717178  301324 api_server.go:88] waiting for apiserver healthz status ...
	I1018 15:02:36.717196  301324 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 15:02:36.723202  301324 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1018 15:02:36.724262  301324 api_server.go:141] control plane version: v1.34.1
	I1018 15:02:36.724289  301324 api_server.go:131] duration metric: took 7.103393ms to wait for apiserver health ...
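
[editor's note] The healthz wait above boils down to polling https://<apiserver>:8443/healthz until it answers 200 with body "ok". A simplified sketch of such a loop; it skips certificate verification where the real code trusts the cluster CA from the profile:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns
// "ok" or the deadline passes, roughly the loop behind the
// api_server.go lines above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.94.2:8443/healthz", time.Minute))
}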
	I1018 15:02:36.724299  301324 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:02:36.727268  301324 system_pods.go:59] 7 kube-system pods found
	I1018 15:02:36.727298  301324 system_pods.go:61] "coredns-66bc5c9577-r2jd5" [59ff8c5d-ba04-4834-90ef-4b06698de3ac] Running
	I1018 15:02:36.727303  301324 system_pods.go:61] "etcd-pause-552434" [94ca4ecc-4d9d-488d-bb76-d9cd3ed363f5] Running
	I1018 15:02:36.727306  301324 system_pods.go:61] "kindnet-5tfsb" [2f8689fe-1eef-4420-8d8f-d4012de98a9d] Running
	I1018 15:02:36.727310  301324 system_pods.go:61] "kube-apiserver-pause-552434" [d923ad4f-a12d-479c-bed5-a401d9f5daea] Running
	I1018 15:02:36.727314  301324 system_pods.go:61] "kube-controller-manager-pause-552434" [1e0a1c6d-bb53-438f-b921-ab4f8af3055b] Running
	I1018 15:02:36.727317  301324 system_pods.go:61] "kube-proxy-kzg2k" [0b01b586-ec2b-4750-8f4d-a3967577be4d] Running
	I1018 15:02:36.727321  301324 system_pods.go:61] "kube-scheduler-pause-552434" [3085fb0b-2dc4-48a7-a445-365a365f0cfc] Running
	I1018 15:02:36.727327  301324 system_pods.go:74] duration metric: took 3.021139ms to wait for pod list to return data ...
	I1018 15:02:36.727335  301324 default_sa.go:34] waiting for default service account to be created ...
	I1018 15:02:36.729993  301324 default_sa.go:45] found service account: "default"
	I1018 15:02:36.730022  301324 default_sa.go:55] duration metric: took 2.677689ms for default service account to be created ...
	I1018 15:02:36.730032  301324 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 15:02:36.732589  301324 system_pods.go:86] 7 kube-system pods found
	I1018 15:02:36.732615  301324 system_pods.go:89] "coredns-66bc5c9577-r2jd5" [59ff8c5d-ba04-4834-90ef-4b06698de3ac] Running
	I1018 15:02:36.732620  301324 system_pods.go:89] "etcd-pause-552434" [94ca4ecc-4d9d-488d-bb76-d9cd3ed363f5] Running
	I1018 15:02:36.732624  301324 system_pods.go:89] "kindnet-5tfsb" [2f8689fe-1eef-4420-8d8f-d4012de98a9d] Running
	I1018 15:02:36.732630  301324 system_pods.go:89] "kube-apiserver-pause-552434" [d923ad4f-a12d-479c-bed5-a401d9f5daea] Running
	I1018 15:02:36.732633  301324 system_pods.go:89] "kube-controller-manager-pause-552434" [1e0a1c6d-bb53-438f-b921-ab4f8af3055b] Running
	I1018 15:02:36.732646  301324 system_pods.go:89] "kube-proxy-kzg2k" [0b01b586-ec2b-4750-8f4d-a3967577be4d] Running
	I1018 15:02:36.732650  301324 system_pods.go:89] "kube-scheduler-pause-552434" [3085fb0b-2dc4-48a7-a445-365a365f0cfc] Running
	I1018 15:02:36.732656  301324 system_pods.go:126] duration metric: took 2.618387ms to wait for k8s-apps to be running ...
	I1018 15:02:36.732667  301324 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 15:02:36.732711  301324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:02:36.746295  301324 system_svc.go:56] duration metric: took 13.613844ms WaitForService to wait for kubelet
	I1018 15:02:36.746322  301324 kubeadm.go:586] duration metric: took 175.363187ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:02:36.746345  301324 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:02:36.749096  301324 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 15:02:36.749139  301324 node_conditions.go:123] node cpu capacity is 8
	I1018 15:02:36.749150  301324 node_conditions.go:105] duration metric: took 2.799625ms to run NodePressure ...
	I1018 15:02:36.749162  301324 start.go:241] waiting for startup goroutines ...
	I1018 15:02:36.749171  301324 start.go:246] waiting for cluster config update ...
	I1018 15:02:36.749181  301324 start.go:255] writing updated cluster config ...
	I1018 15:02:36.749474  301324 ssh_runner.go:195] Run: rm -f paused
	I1018 15:02:36.753694  301324 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:02:36.754341  301324 kapi.go:59] client config for pause-552434: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434/client.key", CAFile:"/home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 15:02:36.757434  301324 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r2jd5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:02:36.761971  301324 pod_ready.go:94] pod "coredns-66bc5c9577-r2jd5" is "Ready"
	I1018 15:02:36.761996  301324 pod_ready.go:86] duration metric: took 4.540912ms for pod "coredns-66bc5c9577-r2jd5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:02:36.764072  301324 pod_ready.go:83] waiting for pod "etcd-pause-552434" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:02:36.767730  301324 pod_ready.go:94] pod "etcd-pause-552434" is "Ready"
	I1018 15:02:36.767752  301324 pod_ready.go:86] duration metric: took 3.659836ms for pod "etcd-pause-552434" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:02:36.769473  301324 pod_ready.go:83] waiting for pod "kube-apiserver-pause-552434" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:02:36.773193  301324 pod_ready.go:94] pod "kube-apiserver-pause-552434" is "Ready"
	I1018 15:02:36.773213  301324 pod_ready.go:86] duration metric: took 3.721191ms for pod "kube-apiserver-pause-552434" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:02:36.775357  301324 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-552434" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:02:37.158572  301324 pod_ready.go:94] pod "kube-controller-manager-pause-552434" is "Ready"
	I1018 15:02:37.158603  301324 pod_ready.go:86] duration metric: took 383.217374ms for pod "kube-controller-manager-pause-552434" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:02:37.358031  301324 pod_ready.go:83] waiting for pod "kube-proxy-kzg2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:02:37.758261  301324 pod_ready.go:94] pod "kube-proxy-kzg2k" is "Ready"
	I1018 15:02:37.758287  301324 pod_ready.go:86] duration metric: took 400.23036ms for pod "kube-proxy-kzg2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:02:37.958545  301324 pod_ready.go:83] waiting for pod "kube-scheduler-pause-552434" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:02:38.358139  301324 pod_ready.go:94] pod "kube-scheduler-pause-552434" is "Ready"
	I1018 15:02:38.358165  301324 pod_ready.go:86] duration metric: took 399.595165ms for pod "kube-scheduler-pause-552434" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:02:38.358176  301324 pod_ready.go:40] duration metric: took 1.604432947s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:02:38.404572  301324 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:02:38.406548  301324 out.go:179] * Done! kubectl is now configured to use "pause-552434" cluster and "default" namespace by default
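
[editor's note] The pod_ready.go lines above poll each control-plane component by label (k8s-app=kube-dns, component=etcd, and so on) until the pod reports the PodReady condition. A condensed client-go sketch of that wait, assuming a client-go dependency and a hypothetical kubeconfig path; the real code builds its rest.Config from the profile's client certs, as the kapi.go dump above shows:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's PodReady condition is True, the
// same test behind the `pod ... is "Ready"` lines above.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// One selector per control-plane component, as in the log.
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
				fmt.Println(sel, "ready")
				break
			}
			time.Sleep(400 * time.Millisecond)
		}
	}
}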
	I1018 15:02:34.424456  278049 logs.go:123] Gathering logs for dmesg ...
	I1018 15:02:34.424492  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 15:02:34.440247  278049 logs.go:123] Gathering logs for describe nodes ...
	I1018 15:02:34.440285  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 15:02:34.499747  278049 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 15:02:37.001371  278049 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 15:02:37.001904  278049 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1018 15:02:37.001999  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 15:02:37.002061  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 15:02:37.032027  278049 cri.go:89] found id: "9fd193852421eda1fa8af182d3f4e060e771279c135421267bf2b3e8351b4e9b"
	I1018 15:02:37.032058  278049 cri.go:89] found id: ""
	I1018 15:02:37.032070  278049 logs.go:282] 1 containers: [9fd193852421eda1fa8af182d3f4e060e771279c135421267bf2b3e8351b4e9b]
	I1018 15:02:37.032134  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:02:37.036260  278049 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 15:02:37.036335  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 15:02:37.063360  278049 cri.go:89] found id: ""
	I1018 15:02:37.063389  278049 logs.go:282] 0 containers: []
	W1018 15:02:37.063400  278049 logs.go:284] No container was found matching "etcd"
	I1018 15:02:37.063408  278049 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 15:02:37.063468  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 15:02:37.092061  278049 cri.go:89] found id: ""
	I1018 15:02:37.092087  278049 logs.go:282] 0 containers: []
	W1018 15:02:37.092095  278049 logs.go:284] No container was found matching "coredns"
	I1018 15:02:37.092100  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 15:02:37.092157  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 15:02:37.121049  278049 cri.go:89] found id: "3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:02:37.121085  278049 cri.go:89] found id: ""
	I1018 15:02:37.121097  278049 logs.go:282] 1 containers: [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011]
	I1018 15:02:37.121156  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:02:37.125211  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 15:02:37.125281  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 15:02:37.153983  278049 cri.go:89] found id: ""
	I1018 15:02:37.154015  278049 logs.go:282] 0 containers: []
	W1018 15:02:37.154027  278049 logs.go:284] No container was found matching "kube-proxy"
	I1018 15:02:37.154038  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 15:02:37.154101  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 15:02:37.185159  278049 cri.go:89] found id: "2c7da3e123fe889df18ea7aa9245556c2393bbb48c6b8d832286db1eeafa4b7a"
	I1018 15:02:37.185185  278049 cri.go:89] found id: ""
	I1018 15:02:37.185194  278049 logs.go:282] 1 containers: [2c7da3e123fe889df18ea7aa9245556c2393bbb48c6b8d832286db1eeafa4b7a]
	I1018 15:02:37.185242  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:02:37.189498  278049 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 15:02:37.189571  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 15:02:37.220256  278049 cri.go:89] found id: ""
	I1018 15:02:37.220288  278049 logs.go:282] 0 containers: []
	W1018 15:02:37.220299  278049 logs.go:284] No container was found matching "kindnet"
	I1018 15:02:37.220307  278049 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 15:02:37.220371  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 15:02:37.251909  278049 cri.go:89] found id: ""
	I1018 15:02:37.251956  278049 logs.go:282] 0 containers: []
	W1018 15:02:37.251965  278049 logs.go:284] No container was found matching "storage-provisioner"
	I1018 15:02:37.251974  278049 logs.go:123] Gathering logs for container status ...
	I1018 15:02:37.251992  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 15:02:37.284712  278049 logs.go:123] Gathering logs for kubelet ...
	I1018 15:02:37.284738  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 15:02:37.351653  278049 logs.go:123] Gathering logs for dmesg ...
	I1018 15:02:37.351692  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 15:02:37.367328  278049 logs.go:123] Gathering logs for describe nodes ...
	I1018 15:02:37.367353  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 15:02:37.422758  278049 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 15:02:37.422784  278049 logs.go:123] Gathering logs for kube-apiserver [9fd193852421eda1fa8af182d3f4e060e771279c135421267bf2b3e8351b4e9b] ...
	I1018 15:02:37.422796  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fd193852421eda1fa8af182d3f4e060e771279c135421267bf2b3e8351b4e9b"
	I1018 15:02:37.456781  278049 logs.go:123] Gathering logs for kube-scheduler [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011] ...
	I1018 15:02:37.456815  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:02:37.500007  278049 logs.go:123] Gathering logs for kube-controller-manager [2c7da3e123fe889df18ea7aa9245556c2393bbb48c6b8d832286db1eeafa4b7a] ...
	I1018 15:02:37.500043  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2c7da3e123fe889df18ea7aa9245556c2393bbb48c6b8d832286db1eeafa4b7a"
	I1018 15:02:37.527653  278049 logs.go:123] Gathering logs for CRI-O ...
	I1018 15:02:37.527683  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
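
[editor's note] The gathering loop above is built from two crictl invocations that appear verbatim in the log: "crictl ps -a --quiet --name=<component>" to find container IDs, then "crictl logs --tail 400 <id>" to collect their output, with a "No container was found" warning when the first step returns nothing. A small Go sketch of the same pattern:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors "sudo crictl ps -a --quiet --name=<name>":
// one container ID per output line, empty slice when none match.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

// tailLogs mirrors "sudo crictl logs --tail <n> <id>".
func tailLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, name := range []string{"kube-apiserver", "kube-scheduler", "kube-controller-manager"} {
		ids, err := containerIDs(name)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		logs, _ := tailLogs(ids[0], 400)
		fmt.Printf("==> %s [%s] <==\n%s\n", name, ids[0], logs)
	}
}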
	I1018 15:02:34.991930  299500 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/proxy-client.crt ...
	I1018 15:02:34.991971  299500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/proxy-client.crt: {Name:mk339da6ab9b605b4b48331fe310d154bce8996d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:02:34.992154  299500 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/proxy-client.key ...
	I1018 15:02:34.992180  299500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/proxy-client.key: {Name:mk370a20b184003cc1f9791bed217e456bd99bb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:02:34.992291  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 15:02:34.992316  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 15:02:34.992333  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 15:02:34.992369  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 15:02:34.992384  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 15:02:34.992403  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 15:02:34.992420  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 15:02:34.992432  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 15:02:34.992481  299500 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem (1338 bytes)
	W1018 15:02:34.992529  299500 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187_empty.pem, impossibly tiny 0 bytes
	I1018 15:02:34.992543  299500 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 15:02:34.992572  299500 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem (1082 bytes)
	I1018 15:02:34.992599  299500 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem (1123 bytes)
	I1018 15:02:34.992632  299500 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem (1675 bytes)
	I1018 15:02:34.992698  299500 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:02:34.992739  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:02:34.992759  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem -> /usr/share/ca-certificates/93187.pem
	I1018 15:02:34.992776  299500 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem -> /usr/share/ca-certificates/931872.pem
	I1018 15:02:34.993557  299500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 15:02:35.016627  299500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 15:02:35.034998  299500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 15:02:35.052588  299500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 15:02:35.071114  299500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1018 15:02:35.089615  299500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1018 15:02:35.108494  299500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 15:02:35.126849  299500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/force-systemd-env-680592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 15:02:35.146366  299500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 15:02:35.166849  299500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem --> /usr/share/ca-certificates/93187.pem (1338 bytes)
	I1018 15:02:35.185956  299500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /usr/share/ca-certificates/931872.pem (1708 bytes)
	I1018 15:02:35.204482  299500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 15:02:35.218610  299500 ssh_runner.go:195] Run: openssl version
	I1018 15:02:35.224992  299500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 15:02:35.236803  299500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:02:35.241333  299500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:02:35.241405  299500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:02:35.276308  299500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 15:02:35.285799  299500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93187.pem && ln -fs /usr/share/ca-certificates/93187.pem /etc/ssl/certs/93187.pem"
	I1018 15:02:35.294531  299500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93187.pem
	I1018 15:02:35.298530  299500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 14:26 /usr/share/ca-certificates/93187.pem
	I1018 15:02:35.298587  299500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93187.pem
	I1018 15:02:35.335410  299500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93187.pem /etc/ssl/certs/51391683.0"
	I1018 15:02:35.344942  299500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/931872.pem && ln -fs /usr/share/ca-certificates/931872.pem /etc/ssl/certs/931872.pem"
	I1018 15:02:35.354221  299500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/931872.pem
	I1018 15:02:35.358210  299500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 14:26 /usr/share/ca-certificates/931872.pem
	I1018 15:02:35.358265  299500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/931872.pem
	I1018 15:02:35.395812  299500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/931872.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 15:02:35.405762  299500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 15:02:35.410050  299500 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
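
[editor's note] Here the failed stat is the signal, not an error: the apiserver-kubelet-client cert only exists after kubeadm has initialized a control plane, so its absence routes this start down the "kubeadm init" path, unlike the pause-552434 run above, which found the cert and attempted a cluster restart instead. A one-function sketch of that check:

package main

import (
	"fmt"
	"os"
)

// isFirstStart mirrors the stat check in the log: if the
// kubeadm-generated apiserver-kubelet-client cert is absent, the
// cluster has likely never been initialized.
func isFirstStart() bool {
	_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	return os.IsNotExist(err)
}

func main() {
	if isFirstStart() {
		fmt.Println("no existing control plane: taking the kubeadm init path")
	} else {
		fmt.Println("existing control plane found: attempting cluster restart")
	}
}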
	I1018 15:02:35.410120  299500 kubeadm.go:400] StartCluster: {Name:force-systemd-env-680592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-680592 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:02:35.410225  299500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 15:02:35.410285  299500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 15:02:35.440100  299500 cri.go:89] found id: ""
	I1018 15:02:35.440175  299500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 15:02:35.449063  299500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 15:02:35.458259  299500 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 15:02:35.458327  299500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 15:02:35.467400  299500 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 15:02:35.467422  299500 kubeadm.go:157] found existing configuration files:
	
	I1018 15:02:35.467470  299500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 15:02:35.476290  299500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 15:02:35.476354  299500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 15:02:35.484773  299500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 15:02:35.494269  299500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 15:02:35.494347  299500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 15:02:35.503165  299500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 15:02:35.511844  299500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 15:02:35.511905  299500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 15:02:35.520732  299500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 15:02:35.529860  299500 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 15:02:35.529945  299500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
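
[editor's note] The grep-then-rm sequence above prunes kubeadm config files that do not reference https://control-plane.minikube.internal:8443; on this first start none of the four files exist, so every grep exits 2 and every "rm -f" is a no-op. A compact Go sketch of the same cleanup, using the paths from the log:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// removeIfStale mirrors the grep-then-rm sequence from the log: keep a
// kubeadm config only if it references the expected control-plane
// endpoint, otherwise remove it (a missing file is fine, like rm -f).
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && bytes.Contains(data, []byte(endpoint)) {
		return nil // config already points at the right endpoint
	}
	switch err := os.Remove(path); {
	case err == nil:
		fmt.Println("removed stale config:", path)
	case os.IsNotExist(err):
		// nothing to remove; same outcome as rm -f
	default:
		return err
	}
	return nil
}

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, endpoint); err != nil {
			panic(err)
		}
	}
}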
	I1018 15:02:35.538300  299500 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 15:02:35.622123  299500 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 15:02:35.690841  299500 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.399120901Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.399962636Z" level=info msg="Conmon does support the --sync option"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.399987579Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.400005704Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.40072011Z" level=info msg="Conmon does support the --sync option"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.400745069Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.405066102Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.405090262Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.405741283Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/c
ni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/
var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.406316366Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.406381098Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.412315455Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.455577369Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-r2jd5 Namespace:kube-system ID:e7d1e2e95184ff8e598139c8d821bcc60376cfc3a57116f31a2582393f250dee UID:59ff8c5d-ba04-4834-90ef-4b06698de3ac NetNS:/var/run/netns/94ed696c-1b43-4e6d-aa7a-9243dfee1d3e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b750}] Aliases:map[]}"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.455762946Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-r2jd5 for CNI network kindnet (type=ptp)"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.456332979Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.456360813Z" level=info msg="Starting seccomp notifier watcher"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.457028662Z" level=info msg="Create NRI interface"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.457977277Z" level=info msg="built-in NRI default validator is disabled"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.458007662Z" level=info msg="runtime interface created"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.458023897Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.458031548Z" level=info msg="runtime interface starting up..."
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.458040241Z" level=info msg="starting plugins..."
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.45806013Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.458459685Z" level=info msg="No systemd watchdog enabled"
	Oct 18 15:02:35 pause-552434 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	83e63972ae32f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   10 seconds ago       Running             coredns                   0                   e7d1e2e95184f       coredns-66bc5c9577-r2jd5               kube-system
	e38d3fecd94dc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   51 seconds ago       Running             kindnet-cni               0                   1df0b8156c62c       kindnet-5tfsb                          kube-system
	15f33accd9028       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   51 seconds ago       Running             kube-proxy                0                   ed0e256aef7eb       kube-proxy-kzg2k                       kube-system
	f07b384914dac       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Running             kube-controller-manager   0                   776b02b2ccd8a       kube-controller-manager-pause-552434   kube-system
	309d2698f21c0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Running             kube-apiserver            0                   8c79de007d070       kube-apiserver-pause-552434            kube-system
	9b700ddbcc14c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Running             etcd                      0                   4d78916223262       etcd-pause-552434                      kube-system
	ef0a486a7b857       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Running             kube-scheduler            0                   da39c1c7bf8f7       kube-scheduler-pause-552434            kube-system
	
	
	==> coredns [83e63972ae32f350d8fe81115bad361203b44a85b2d3034e9cffa0da575020a6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34645 - 42471 "HINFO IN 3979858299681464764.7361797870785617365. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.090876185s
	
	
	==> describe nodes <==
	Name:               pause-552434
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-552434
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=pause-552434
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_01_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:01:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-552434
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:02:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:02:34 +0000   Sat, 18 Oct 2025 15:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:02:34 +0000   Sat, 18 Oct 2025 15:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:02:34 +0000   Sat, 18 Oct 2025 15:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 15:02:34 +0000   Sat, 18 Oct 2025 15:02:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-552434
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                9f95a6d0-dc9a-4614-8417-adf252bb5a61
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-r2jd5                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     52s
	  kube-system                 etcd-pause-552434                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         58s
	  kube-system                 kindnet-5tfsb                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      52s
	  kube-system                 kube-apiserver-pause-552434             250m (3%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-controller-manager-pause-552434    200m (2%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-proxy-kzg2k                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-scheduler-pause-552434             100m (1%)     0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 51s                kube-proxy       
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x5 over 63s)  kubelet          Node pause-552434 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x4 over 63s)  kubelet          Node pause-552434 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x4 over 63s)  kubelet          Node pause-552434 status is now: NodeHasSufficientPID
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s                kubelet          Node pause-552434 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s                kubelet          Node pause-552434 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s                kubelet          Node pause-552434 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node pause-552434 event: Registered Node pause-552434 in Controller
	  Normal  NodeReady                11s                kubelet          Node pause-552434 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	
	
	==> etcd [9b700ddbcc14c420cbc0df381ae301fef7d682c6ca669b88a2fa3253285eb9bf] <==
	{"level":"warn","ts":"2025-10-18T15:01:40.729393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.736359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.743081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.752351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.758637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.765026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.771705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.778250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.784542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.800356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.809077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.815434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.822553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.828866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.841308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.847470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.853645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.860778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.867223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.874273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.881252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.894621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.904732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.911739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.966425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41586","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:02:41 up  2:45,  0 user,  load average: 3.13, 2.11, 1.50
	Linux pause-552434 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e38d3fecd94dce09c272919a6b3a4893b97e5e77a057ca49c43c1ef2d6a20b6a] <==
	I1018 15:01:50.064563       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 15:01:50.064860       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1018 15:01:50.065060       1 main.go:148] setting mtu 1500 for CNI 
	I1018 15:01:50.065087       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 15:01:50.065123       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T15:01:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 15:01:50.271207       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 15:01:50.272421       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 15:01:50.272501       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 15:01:50.272759       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 15:02:20.271506       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 15:02:20.272444       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 15:02:20.272480       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 15:02:20.272487       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1018 15:02:21.773241       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 15:02:21.773277       1 metrics.go:72] Registering metrics
	I1018 15:02:21.773373       1 controller.go:711] "Syncing nftables rules"
	I1018 15:02:30.274046       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 15:02:30.274085       1 main.go:301] handling current node
	I1018 15:02:40.274023       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 15:02:40.274077       1 main.go:301] handling current node
	
	
	==> kube-apiserver [309d2698f21c03ab93a63ac602bee45208d646cc594c6c20bf1b7fde417c6c8e] <==
	I1018 15:01:41.439458       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 15:01:41.439496       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1018 15:01:41.443787       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:01:41.445236       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 15:01:41.452027       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:01:41.452506       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 15:01:41.452533       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 15:01:41.627481       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 15:01:42.335203       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 15:01:42.339005       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 15:01:42.339024       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:01:42.830844       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:01:42.871975       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:01:42.940685       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 15:01:42.951018       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1018 15:01:42.952615       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 15:01:42.957864       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 15:01:43.364088       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 15:01:43.799992       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 15:01:43.817104       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 15:01:43.832573       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 15:01:49.019650       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 15:01:49.220449       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:01:49.226425       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:01:49.467746       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [f07b384914dac3fb4353daeb697eeceec6d1144cacc371aa0b6ae1c6306ecbf2] <==
	I1018 15:01:48.362985       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 15:01:48.363010       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 15:01:48.363028       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 15:01:48.363115       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 15:01:48.363227       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 15:01:48.363255       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 15:01:48.363350       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-552434"
	I1018 15:01:48.363427       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 15:01:48.363795       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 15:01:48.363589       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 15:01:48.363585       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 15:01:48.363705       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 15:01:48.363720       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 15:01:48.363731       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 15:01:48.364528       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 15:01:48.364622       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 15:01:48.365075       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 15:01:48.365559       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 15:01:48.365577       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 15:01:48.367562       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 15:01:48.371510       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 15:01:48.373403       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 15:01:48.381464       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 15:01:48.384323       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 15:02:33.370846       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [15f33accd9028684dd1a5553833f500450d0eb21fbbe9b87813fa2c338823e84] <==
	I1018 15:01:49.905078       1 server_linux.go:53] "Using iptables proxy"
	I1018 15:01:49.965808       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 15:01:50.066379       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 15:01:50.066420       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1018 15:01:50.066554       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 15:01:50.093938       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 15:01:50.094023       1 server_linux.go:132] "Using iptables Proxier"
	I1018 15:01:50.101045       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 15:01:50.101703       1 server.go:527] "Version info" version="v1.34.1"
	I1018 15:01:50.101736       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:01:50.105705       1 config.go:200] "Starting service config controller"
	I1018 15:01:50.105728       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 15:01:50.105765       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 15:01:50.105771       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 15:01:50.105864       1 config.go:309] "Starting node config controller"
	I1018 15:01:50.105880       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 15:01:50.105888       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 15:01:50.106042       1 config.go:106] "Starting endpoint slice config controller"
	I1018 15:01:50.106057       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 15:01:50.205806       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 15:01:50.205932       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 15:01:50.207132       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ef0a486a7b8571324c6d2501c8769bc1892c74983b0a30f02f7e96ff6327aaad] <==
	E1018 15:01:41.388243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 15:01:41.388326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 15:01:41.388348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 15:01:41.388359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 15:01:41.388371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 15:01:41.388389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 15:01:41.388450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 15:01:41.388484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 15:01:41.388549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 15:01:41.388515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 15:01:41.388540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 15:01:41.388628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 15:01:42.194960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 15:01:42.207374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 15:01:42.269631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 15:01:42.308355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 15:01:42.315558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 15:01:42.317455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 15:01:42.361896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 15:01:42.460362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 15:01:42.577019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 15:01:42.581076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 15:01:42.623630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 15:01:42.628865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1018 15:01:45.086297       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 15:01:44 pause-552434 kubelet[1287]: E1018 15:01:44.734262    1287 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-552434\" already exists" pod="kube-system/kube-apiserver-pause-552434"
	Oct 18 15:01:44 pause-552434 kubelet[1287]: I1018 15:01:44.755335    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-552434" podStartSLOduration=1.75531555 podStartE2EDuration="1.75531555s" podCreationTimestamp="2025-10-18 15:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:01:44.755265921 +0000 UTC m=+1.174431095" watchObservedRunningTime="2025-10-18 15:01:44.75531555 +0000 UTC m=+1.174480725"
	Oct 18 15:01:44 pause-552434 kubelet[1287]: I1018 15:01:44.785062    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-552434" podStartSLOduration=1.785040871 podStartE2EDuration="1.785040871s" podCreationTimestamp="2025-10-18 15:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:01:44.772126305 +0000 UTC m=+1.191291478" watchObservedRunningTime="2025-10-18 15:01:44.785040871 +0000 UTC m=+1.204206044"
	Oct 18 15:01:44 pause-552434 kubelet[1287]: I1018 15:01:44.785166    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-552434" podStartSLOduration=1.785158714 podStartE2EDuration="1.785158714s" podCreationTimestamp="2025-10-18 15:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:01:44.784987035 +0000 UTC m=+1.204152208" watchObservedRunningTime="2025-10-18 15:01:44.785158714 +0000 UTC m=+1.204323885"
	Oct 18 15:01:44 pause-552434 kubelet[1287]: I1018 15:01:44.821006    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-552434" podStartSLOduration=1.8209800889999999 podStartE2EDuration="1.820980089s" podCreationTimestamp="2025-10-18 15:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:01:44.800724575 +0000 UTC m=+1.219889734" watchObservedRunningTime="2025-10-18 15:01:44.820980089 +0000 UTC m=+1.240145264"
	Oct 18 15:01:48 pause-552434 kubelet[1287]: I1018 15:01:48.429876    1287 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 15:01:48 pause-552434 kubelet[1287]: I1018 15:01:48.430686    1287 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 15:01:49 pause-552434 kubelet[1287]: I1018 15:01:49.505120    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b01b586-ec2b-4750-8f4d-a3967577be4d-lib-modules\") pod \"kube-proxy-kzg2k\" (UID: \"0b01b586-ec2b-4750-8f4d-a3967577be4d\") " pod="kube-system/kube-proxy-kzg2k"
	Oct 18 15:01:49 pause-552434 kubelet[1287]: I1018 15:01:49.505183    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s64bp\" (UniqueName: \"kubernetes.io/projected/2f8689fe-1eef-4420-8d8f-d4012de98a9d-kube-api-access-s64bp\") pod \"kindnet-5tfsb\" (UID: \"2f8689fe-1eef-4420-8d8f-d4012de98a9d\") " pod="kube-system/kindnet-5tfsb"
	Oct 18 15:01:49 pause-552434 kubelet[1287]: I1018 15:01:49.505219    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f8689fe-1eef-4420-8d8f-d4012de98a9d-xtables-lock\") pod \"kindnet-5tfsb\" (UID: \"2f8689fe-1eef-4420-8d8f-d4012de98a9d\") " pod="kube-system/kindnet-5tfsb"
	Oct 18 15:01:49 pause-552434 kubelet[1287]: I1018 15:01:49.505247    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b01b586-ec2b-4750-8f4d-a3967577be4d-kube-proxy\") pod \"kube-proxy-kzg2k\" (UID: \"0b01b586-ec2b-4750-8f4d-a3967577be4d\") " pod="kube-system/kube-proxy-kzg2k"
	Oct 18 15:01:49 pause-552434 kubelet[1287]: I1018 15:01:49.505267    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2f8689fe-1eef-4420-8d8f-d4012de98a9d-cni-cfg\") pod \"kindnet-5tfsb\" (UID: \"2f8689fe-1eef-4420-8d8f-d4012de98a9d\") " pod="kube-system/kindnet-5tfsb"
	Oct 18 15:01:49 pause-552434 kubelet[1287]: I1018 15:01:49.505288    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f8689fe-1eef-4420-8d8f-d4012de98a9d-lib-modules\") pod \"kindnet-5tfsb\" (UID: \"2f8689fe-1eef-4420-8d8f-d4012de98a9d\") " pod="kube-system/kindnet-5tfsb"
	Oct 18 15:01:49 pause-552434 kubelet[1287]: I1018 15:01:49.505344    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b01b586-ec2b-4750-8f4d-a3967577be4d-xtables-lock\") pod \"kube-proxy-kzg2k\" (UID: \"0b01b586-ec2b-4750-8f4d-a3967577be4d\") " pod="kube-system/kube-proxy-kzg2k"
	Oct 18 15:01:49 pause-552434 kubelet[1287]: I1018 15:01:49.505392    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btvsl\" (UniqueName: \"kubernetes.io/projected/0b01b586-ec2b-4750-8f4d-a3967577be4d-kube-api-access-btvsl\") pod \"kube-proxy-kzg2k\" (UID: \"0b01b586-ec2b-4750-8f4d-a3967577be4d\") " pod="kube-system/kube-proxy-kzg2k"
	Oct 18 15:01:50 pause-552434 kubelet[1287]: I1018 15:01:50.753792    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5tfsb" podStartSLOduration=1.75377274 podStartE2EDuration="1.75377274s" podCreationTimestamp="2025-10-18 15:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:01:50.753663013 +0000 UTC m=+7.172828186" watchObservedRunningTime="2025-10-18 15:01:50.75377274 +0000 UTC m=+7.172937916"
	Oct 18 15:01:50 pause-552434 kubelet[1287]: I1018 15:01:50.763275    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kzg2k" podStartSLOduration=1.763252609 podStartE2EDuration="1.763252609s" podCreationTimestamp="2025-10-18 15:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:01:50.763111713 +0000 UTC m=+7.182276886" watchObservedRunningTime="2025-10-18 15:01:50.763252609 +0000 UTC m=+7.182417780"
	Oct 18 15:02:30 pause-552434 kubelet[1287]: I1018 15:02:30.373834    1287 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 15:02:30 pause-552434 kubelet[1287]: I1018 15:02:30.510028    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg498\" (UniqueName: \"kubernetes.io/projected/59ff8c5d-ba04-4834-90ef-4b06698de3ac-kube-api-access-kg498\") pod \"coredns-66bc5c9577-r2jd5\" (UID: \"59ff8c5d-ba04-4834-90ef-4b06698de3ac\") " pod="kube-system/coredns-66bc5c9577-r2jd5"
	Oct 18 15:02:30 pause-552434 kubelet[1287]: I1018 15:02:30.510134    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59ff8c5d-ba04-4834-90ef-4b06698de3ac-config-volume\") pod \"coredns-66bc5c9577-r2jd5\" (UID: \"59ff8c5d-ba04-4834-90ef-4b06698de3ac\") " pod="kube-system/coredns-66bc5c9577-r2jd5"
	Oct 18 15:02:30 pause-552434 kubelet[1287]: I1018 15:02:30.842345    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-r2jd5" podStartSLOduration=41.842324475 podStartE2EDuration="41.842324475s" podCreationTimestamp="2025-10-18 15:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:02:30.842156194 +0000 UTC m=+47.261321390" watchObservedRunningTime="2025-10-18 15:02:30.842324475 +0000 UTC m=+47.261489647"
	Oct 18 15:02:38 pause-552434 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 15:02:38 pause-552434 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 15:02:38 pause-552434 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 15:02:38 pause-552434 systemd[1]: kubelet.service: Consumed 2.331s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-552434 -n pause-552434
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-552434 -n pause-552434: exit status 2 (348.553702ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-552434 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-552434
helpers_test.go:243: (dbg) docker inspect pause-552434:

-- stdout --
	[
	    {
	        "Id": "f6579505a3b2004fab034e191688657e1ce18377911586a8a9ccdde1fc41a551",
	        "Created": "2025-10-18T15:01:27.758869286Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 285828,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T15:01:27.828554989Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/f6579505a3b2004fab034e191688657e1ce18377911586a8a9ccdde1fc41a551/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f6579505a3b2004fab034e191688657e1ce18377911586a8a9ccdde1fc41a551/hostname",
	        "HostsPath": "/var/lib/docker/containers/f6579505a3b2004fab034e191688657e1ce18377911586a8a9ccdde1fc41a551/hosts",
	        "LogPath": "/var/lib/docker/containers/f6579505a3b2004fab034e191688657e1ce18377911586a8a9ccdde1fc41a551/f6579505a3b2004fab034e191688657e1ce18377911586a8a9ccdde1fc41a551-json.log",
	        "Name": "/pause-552434",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-552434:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-552434",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f6579505a3b2004fab034e191688657e1ce18377911586a8a9ccdde1fc41a551",
	                "LowerDir": "/var/lib/docker/overlay2/6fb06321da58a31f868fc65dcd15f68403adaee09c85080f07ca7f60592006b1-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6fb06321da58a31f868fc65dcd15f68403adaee09c85080f07ca7f60592006b1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6fb06321da58a31f868fc65dcd15f68403adaee09c85080f07ca7f60592006b1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6fb06321da58a31f868fc65dcd15f68403adaee09c85080f07ca7f60592006b1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-552434",
	                "Source": "/var/lib/docker/volumes/pause-552434/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-552434",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-552434",
	                "name.minikube.sigs.k8s.io": "pause-552434",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "29900e711d7824d6bfa6d53d7d115550dfd04199eb809588c99c4aef4385f550",
	            "SandboxKey": "/var/run/docker/netns/29900e711d78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33003"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33004"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33007"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33005"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33006"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-552434": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:2e:65:b2:ea:48",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "51c247ddc726a0b307ab7cffda8e1a5a54da0b1db9b37c1506ebaa8b40b84775",
	                    "EndpointID": "d7acf4088ea5aedf676fd01131878477619e6d7c2e65647f2a436acda6601843",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-552434",
	                        "f6579505a3b2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
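Aside (illustrative, not part of the captured output): the inspect JSON above records the forwarded host ports for the node container under NetworkSettings.Ports. A minimal Go sketch that decodes just that slice of the document, assuming `docker container inspect pause-552434` is piped to stdin; the struct fields mirror only the keys shown in the dump, and encoding/json ignores the rest.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// portBinding matches one entry under NetworkSettings.Ports,
// e.g. {"HostIp": "127.0.0.1", "HostPort": "33003"}.
type portBinding struct {
	HostIP   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

// container matches just the fields we need; `docker container inspect`
// emits a JSON array with one element per container.
type container struct {
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	var containers []container
	if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, c := range containers {
		for containerPort, bindings := range c.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", containerPort, b.HostIP, b.HostPort)
			}
		}
	}
}

Run against the inspect output above, this would print, among others, 22/tcp -> 127.0.0.1:33003 and 8443/tcp -> 127.0.0.1:33006.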
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-552434 -n pause-552434
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-552434 -n pause-552434: exit status 2 (428.803502ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-552434 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-552434 logs -n 25: (1.060572542s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-034446 sudo docker system info                                                                                    │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo cri-dockerd --version                                                                                 │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo systemctl cat containerd --no-pager                                                                   │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo cat /etc/containerd/config.toml                                                                       │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo containerd config dump                                                                                │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo systemctl cat crio --no-pager                                                                         │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ -p cilium-034446 sudo crio config                                                                                           │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ delete  │ -p cilium-034446                                                                                                            │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ start   │ -p force-systemd-flag-536692 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-536692 │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ ssh     │ -p NoKubernetes-286873 sudo systemctl is-active --quiet service kubelet                                                     │ NoKubernetes-286873       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ force-systemd-flag-536692 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                        │ force-systemd-flag-536692 │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ delete  │ -p force-systemd-flag-536692                                                                                                │ force-systemd-flag-536692 │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ start   │ -p force-systemd-env-680592 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-680592  │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ start   │ -p pause-552434 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-552434              │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ pause   │ -p pause-552434 --alsologtostderr -v=5                                                                                      │ pause-552434              │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ stop    │ -p NoKubernetes-286873                                                                                                      │ NoKubernetes-286873       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ start   │ -p NoKubernetes-286873 --driver=docker  --container-runtime=crio                                                            │ NoKubernetes-286873       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:02:42
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:02:42.410954  304641 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:02:42.411270  304641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:02:42.411275  304641 out.go:374] Setting ErrFile to fd 2...
	I1018 15:02:42.411281  304641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:02:42.411629  304641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:02:42.412233  304641 out.go:368] Setting JSON to false
	I1018 15:02:42.413996  304641 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9913,"bootTime":1760789849,"procs":414,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:02:42.414085  304641 start.go:141] virtualization: kvm guest
	I1018 15:02:42.418069  304641 out.go:179] * [NoKubernetes-286873] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:02:42.419441  304641 notify.go:220] Checking for updates...
	I1018 15:02:42.425776  304641 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:02:42.429517  304641 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:02:42.430928  304641 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:02:42.432303  304641 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 15:02:42.433850  304641 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:02:42.435162  304641 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:02:42.436846  304641 config.go:182] Loaded profile config "NoKubernetes-286873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1018 15:02:42.437586  304641 start.go:1804] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1018 15:02:42.437611  304641 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:02:42.469164  304641 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:02:42.469317  304641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:02:42.567270  304641 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 15:02:42.557059232 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:02:42.567356  304641 docker.go:318] overlay module found
	I1018 15:02:42.570157  304641 out.go:179] * Using the docker driver based on existing profile
	I1018 15:02:42.571276  304641 start.go:305] selected driver: docker
	I1018 15:02:42.571285  304641 start.go:925] validating driver "docker" against &{Name:NoKubernetes-286873 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-286873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:02:42.571354  304641 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:02:42.571429  304641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:02:42.639189  304641 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 15:02:42.626299585 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:02:42.640046  304641 cni.go:84] Creating CNI manager for ""
	I1018 15:02:42.640127  304641 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:02:42.640187  304641 start.go:349] cluster config:
	{Name:NoKubernetes-286873 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-286873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:02:42.641798  304641 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-286873
	I1018 15:02:42.642977  304641 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:02:42.644295  304641 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:02:42.645601  304641 preload.go:183] Checking if preload exists for k8s version v0.0.0 and runtime crio
	I1018 15:02:42.645725  304641 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	W1018 15:02:42.670136  304641 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1018 15:02:42.671811  304641 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:02:42.671824  304641 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	W1018 15:02:42.706405  304641 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1018 15:02:42.706534  304641 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/NoKubernetes-286873/config.json ...
	I1018 15:02:42.706862  304641 cache.go:232] Successfully downloaded all kic artifacts
	I1018 15:02:42.706892  304641 start.go:360] acquireMachinesLock for NoKubernetes-286873: {Name:mk1b348f6a9b6b2acc02930462f1f49cc423ee8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:02:42.706999  304641 start.go:364] duration metric: took 58.587µs to acquireMachinesLock for "NoKubernetes-286873"
	I1018 15:02:42.707029  304641 start.go:96] Skipping create...Using existing machine configuration
	I1018 15:02:42.707034  304641 fix.go:54] fixHost starting: 
	I1018 15:02:42.707423  304641 cli_runner.go:164] Run: docker container inspect NoKubernetes-286873 --format={{.State.Status}}
	I1018 15:02:42.731059  304641 fix.go:112] recreateIfNeeded on NoKubernetes-286873: state=Stopped err=<nil>
	W1018 15:02:42.731093  304641 fix.go:138] unexpected machine state, will restart: <nil>
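	
	Aside (illustrative, not from the run): the "Last Start" header above documents its own line format, [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. A minimal Go sketch of a parser for that header using only the standard library; the regexp and field names are assumptions for illustration, not minikube code.
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// Capture groups: severity (I/W/E/F), mmdd date, wall-clock time,
	// thread id, source file:line, and the free-form message.
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)
	
	func main() {
		sample := "I1018 15:02:42.410954  304641 out.go:360] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(sample); m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s at=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}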
	
	
	==> CRI-O <==
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.399120901Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.399962636Z" level=info msg="Conmon does support the --sync option"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.399987579Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.400005704Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.40072011Z" level=info msg="Conmon does support the --sync option"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.400745069Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.405066102Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.405090262Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.405741283Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/c
ni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/
var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.406316366Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.406381098Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.412315455Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.455577369Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-r2jd5 Namespace:kube-system ID:e7d1e2e95184ff8e598139c8d821bcc60376cfc3a57116f31a2582393f250dee UID:59ff8c5d-ba04-4834-90ef-4b06698de3ac NetNS:/var/run/netns/94ed696c-1b43-4e6d-aa7a-9243dfee1d3e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b750}] Aliases:map[]}"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.455762946Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-r2jd5 for CNI network kindnet (type=ptp)"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.456332979Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.456360813Z" level=info msg="Starting seccomp notifier watcher"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.457028662Z" level=info msg="Create NRI interface"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.457977277Z" level=info msg="built-in NRI default validator is disabled"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.458007662Z" level=info msg="runtime interface created"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.458023897Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.458031548Z" level=info msg="runtime interface starting up..."
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.458040241Z" level=info msg="starting plugins..."
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.45806013Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 18 15:02:35 pause-552434 crio[2168]: time="2025-10-18T15:02:35.458459685Z" level=info msg="No systemd watchdog enabled"
	Oct 18 15:02:35 pause-552434 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	83e63972ae32f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago       Running             coredns                   0                   e7d1e2e95184f       coredns-66bc5c9577-r2jd5               kube-system
	e38d3fecd94dc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   53 seconds ago       Running             kindnet-cni               0                   1df0b8156c62c       kindnet-5tfsb                          kube-system
	15f33accd9028       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   53 seconds ago       Running             kube-proxy                0                   ed0e256aef7eb       kube-proxy-kzg2k                       kube-system
	f07b384914dac       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Running             kube-controller-manager   0                   776b02b2ccd8a       kube-controller-manager-pause-552434   kube-system
	309d2698f21c0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Running             kube-apiserver            0                   8c79de007d070       kube-apiserver-pause-552434            kube-system
	9b700ddbcc14c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Running             etcd                      0                   4d78916223262       etcd-pause-552434                      kube-system
	ef0a486a7b857       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Running             kube-scheduler            0                   da39c1c7bf8f7       kube-scheduler-pause-552434            kube-system
	
	
	==> coredns [83e63972ae32f350d8fe81115bad361203b44a85b2d3034e9cffa0da575020a6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34645 - 42471 "HINFO IN 3979858299681464764.7361797870785617365. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.090876185s
	
	
	==> describe nodes <==
	Name:               pause-552434
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-552434
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=pause-552434
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_01_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:01:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-552434
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:02:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:02:34 +0000   Sat, 18 Oct 2025 15:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:02:34 +0000   Sat, 18 Oct 2025 15:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:02:34 +0000   Sat, 18 Oct 2025 15:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 15:02:34 +0000   Sat, 18 Oct 2025 15:02:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-552434
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                9f95a6d0-dc9a-4614-8417-adf252bb5a61
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-r2jd5                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     54s
	  kube-system                 etcd-pause-552434                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         60s
	  kube-system                 kindnet-5tfsb                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-pause-552434             250m (3%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-pause-552434    200m (2%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-kzg2k                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-pause-552434             100m (1%)     0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 53s                kube-proxy       
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x5 over 65s)  kubelet          Node pause-552434 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x4 over 65s)  kubelet          Node pause-552434 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x4 over 65s)  kubelet          Node pause-552434 status is now: NodeHasSufficientPID
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s                kubelet          Node pause-552434 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s                kubelet          Node pause-552434 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s                kubelet          Node pause-552434 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                node-controller  Node pause-552434 event: Registered Node pause-552434 in Controller
	  Normal  NodeReady                13s                kubelet          Node pause-552434 status is now: NodeReady
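	
	Aside (illustrative, not from the run): the percentages in the "Allocated resources" table above appear to be requested millicores over total node millicores, truncated to a whole percent. A two-line Go check against the numbers shown; the helper name is an assumption for illustration.
	
	package main
	
	import "fmt"
	
	// percent divides requested millicores by total node millicores
	// (capacity given in whole CPUs); integer division truncates,
	// matching the table above.
	func percent(milli, capacityCPUs int) int {
		return milli * 100 / (capacityCPUs * 1000)
	}
	
	func main() {
		fmt.Println(percent(850, 8)) // 850m CPU requests on the 8-CPU node -> 10 ("10%")
		fmt.Println(percent(100, 8)) // 100m CPU limits -> 1 ("1%")
	}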
	
	
	==> dmesg <==
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	
	
	==> etcd [9b700ddbcc14c420cbc0df381ae301fef7d682c6ca669b88a2fa3253285eb9bf] <==
	{"level":"warn","ts":"2025-10-18T15:01:40.729393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.736359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.743081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.752351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.758637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.765026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.771705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.778250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.784542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.800356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.809077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.815434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.822553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.828866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.841308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.847470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.853645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.860778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.867223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.874273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.881252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.894621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.904732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.911739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:01:40.966425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41586","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:02:43 up  2:45,  0 user,  load average: 3.36, 2.17, 1.53
	Linux pause-552434 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e38d3fecd94dce09c272919a6b3a4893b97e5e77a057ca49c43c1ef2d6a20b6a] <==
	I1018 15:01:50.064563       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 15:01:50.064860       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1018 15:01:50.065060       1 main.go:148] setting mtu 1500 for CNI 
	I1018 15:01:50.065087       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 15:01:50.065123       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T15:01:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 15:01:50.271207       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 15:01:50.272421       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 15:01:50.272501       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 15:01:50.272759       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 15:02:20.271506       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 15:02:20.272444       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 15:02:20.272480       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 15:02:20.272487       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1018 15:02:21.773241       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 15:02:21.773277       1 metrics.go:72] Registering metrics
	I1018 15:02:21.773373       1 controller.go:711] "Syncing nftables rules"
	I1018 15:02:30.274046       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 15:02:30.274085       1 main.go:301] handling current node
	I1018 15:02:40.274023       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 15:02:40.274077       1 main.go:301] handling current node
	
	
	==> kube-apiserver [309d2698f21c03ab93a63ac602bee45208d646cc594c6c20bf1b7fde417c6c8e] <==
	I1018 15:01:41.439458       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 15:01:41.439496       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1018 15:01:41.443787       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:01:41.445236       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 15:01:41.452027       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:01:41.452506       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 15:01:41.452533       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 15:01:41.627481       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 15:01:42.335203       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 15:01:42.339005       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 15:01:42.339024       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:01:42.830844       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:01:42.871975       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:01:42.940685       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 15:01:42.951018       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1018 15:01:42.952615       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 15:01:42.957864       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 15:01:43.364088       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 15:01:43.799992       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 15:01:43.817104       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 15:01:43.832573       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 15:01:49.019650       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 15:01:49.220449       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:01:49.226425       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:01:49.467746       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [f07b384914dac3fb4353daeb697eeceec6d1144cacc371aa0b6ae1c6306ecbf2] <==
	I1018 15:01:48.362985       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 15:01:48.363010       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 15:01:48.363028       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 15:01:48.363115       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 15:01:48.363227       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 15:01:48.363255       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 15:01:48.363350       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-552434"
	I1018 15:01:48.363427       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 15:01:48.363795       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 15:01:48.363589       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 15:01:48.363585       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 15:01:48.363705       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 15:01:48.363720       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 15:01:48.363731       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 15:01:48.364528       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 15:01:48.364622       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 15:01:48.365075       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 15:01:48.365559       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 15:01:48.365577       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 15:01:48.367562       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 15:01:48.371510       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 15:01:48.373403       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 15:01:48.381464       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 15:01:48.384323       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 15:02:33.370846       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [15f33accd9028684dd1a5553833f500450d0eb21fbbe9b87813fa2c338823e84] <==
	I1018 15:01:49.905078       1 server_linux.go:53] "Using iptables proxy"
	I1018 15:01:49.965808       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 15:01:50.066379       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 15:01:50.066420       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1018 15:01:50.066554       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 15:01:50.093938       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 15:01:50.094023       1 server_linux.go:132] "Using iptables Proxier"
	I1018 15:01:50.101045       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 15:01:50.101703       1 server.go:527] "Version info" version="v1.34.1"
	I1018 15:01:50.101736       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:01:50.105705       1 config.go:200] "Starting service config controller"
	I1018 15:01:50.105728       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 15:01:50.105765       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 15:01:50.105771       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 15:01:50.105864       1 config.go:309] "Starting node config controller"
	I1018 15:01:50.105880       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 15:01:50.105888       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 15:01:50.106042       1 config.go:106] "Starting endpoint slice config controller"
	I1018 15:01:50.106057       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 15:01:50.205806       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 15:01:50.205932       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 15:01:50.207132       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ef0a486a7b8571324c6d2501c8769bc1892c74983b0a30f02f7e96ff6327aaad] <==
	E1018 15:01:41.388243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 15:01:41.388326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 15:01:41.388348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 15:01:41.388359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 15:01:41.388371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 15:01:41.388389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 15:01:41.388450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 15:01:41.388484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 15:01:41.388549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 15:01:41.388515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 15:01:41.388540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 15:01:41.388628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 15:01:42.194960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 15:01:42.207374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 15:01:42.269631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 15:01:42.308355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 15:01:42.315558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 15:01:42.317455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 15:01:42.361896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 15:01:42.460362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 15:01:42.577019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 15:01:42.581076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 15:01:42.623630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 15:01:42.628865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1018 15:01:45.086297       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 15:01:44 pause-552434 kubelet[1287]: E1018 15:01:44.734262    1287 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-552434\" already exists" pod="kube-system/kube-apiserver-pause-552434"
	Oct 18 15:01:44 pause-552434 kubelet[1287]: I1018 15:01:44.755335    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-552434" podStartSLOduration=1.75531555 podStartE2EDuration="1.75531555s" podCreationTimestamp="2025-10-18 15:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:01:44.755265921 +0000 UTC m=+1.174431095" watchObservedRunningTime="2025-10-18 15:01:44.75531555 +0000 UTC m=+1.174480725"
	Oct 18 15:01:44 pause-552434 kubelet[1287]: I1018 15:01:44.785062    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-552434" podStartSLOduration=1.785040871 podStartE2EDuration="1.785040871s" podCreationTimestamp="2025-10-18 15:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:01:44.772126305 +0000 UTC m=+1.191291478" watchObservedRunningTime="2025-10-18 15:01:44.785040871 +0000 UTC m=+1.204206044"
	Oct 18 15:01:44 pause-552434 kubelet[1287]: I1018 15:01:44.785166    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-552434" podStartSLOduration=1.785158714 podStartE2EDuration="1.785158714s" podCreationTimestamp="2025-10-18 15:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:01:44.784987035 +0000 UTC m=+1.204152208" watchObservedRunningTime="2025-10-18 15:01:44.785158714 +0000 UTC m=+1.204323885"
	Oct 18 15:01:44 pause-552434 kubelet[1287]: I1018 15:01:44.821006    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-552434" podStartSLOduration=1.8209800889999999 podStartE2EDuration="1.820980089s" podCreationTimestamp="2025-10-18 15:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:01:44.800724575 +0000 UTC m=+1.219889734" watchObservedRunningTime="2025-10-18 15:01:44.820980089 +0000 UTC m=+1.240145264"
	Oct 18 15:01:48 pause-552434 kubelet[1287]: I1018 15:01:48.429876    1287 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 15:01:48 pause-552434 kubelet[1287]: I1018 15:01:48.430686    1287 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 15:01:49 pause-552434 kubelet[1287]: I1018 15:01:49.505120    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b01b586-ec2b-4750-8f4d-a3967577be4d-lib-modules\") pod \"kube-proxy-kzg2k\" (UID: \"0b01b586-ec2b-4750-8f4d-a3967577be4d\") " pod="kube-system/kube-proxy-kzg2k"
	Oct 18 15:01:49 pause-552434 kubelet[1287]: I1018 15:01:49.505183    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s64bp\" (UniqueName: \"kubernetes.io/projected/2f8689fe-1eef-4420-8d8f-d4012de98a9d-kube-api-access-s64bp\") pod \"kindnet-5tfsb\" (UID: \"2f8689fe-1eef-4420-8d8f-d4012de98a9d\") " pod="kube-system/kindnet-5tfsb"
	Oct 18 15:01:49 pause-552434 kubelet[1287]: I1018 15:01:49.505219    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f8689fe-1eef-4420-8d8f-d4012de98a9d-xtables-lock\") pod \"kindnet-5tfsb\" (UID: \"2f8689fe-1eef-4420-8d8f-d4012de98a9d\") " pod="kube-system/kindnet-5tfsb"
	Oct 18 15:01:49 pause-552434 kubelet[1287]: I1018 15:01:49.505247    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b01b586-ec2b-4750-8f4d-a3967577be4d-kube-proxy\") pod \"kube-proxy-kzg2k\" (UID: \"0b01b586-ec2b-4750-8f4d-a3967577be4d\") " pod="kube-system/kube-proxy-kzg2k"
	Oct 18 15:01:49 pause-552434 kubelet[1287]: I1018 15:01:49.505267    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2f8689fe-1eef-4420-8d8f-d4012de98a9d-cni-cfg\") pod \"kindnet-5tfsb\" (UID: \"2f8689fe-1eef-4420-8d8f-d4012de98a9d\") " pod="kube-system/kindnet-5tfsb"
	Oct 18 15:01:49 pause-552434 kubelet[1287]: I1018 15:01:49.505288    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f8689fe-1eef-4420-8d8f-d4012de98a9d-lib-modules\") pod \"kindnet-5tfsb\" (UID: \"2f8689fe-1eef-4420-8d8f-d4012de98a9d\") " pod="kube-system/kindnet-5tfsb"
	Oct 18 15:01:49 pause-552434 kubelet[1287]: I1018 15:01:49.505344    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b01b586-ec2b-4750-8f4d-a3967577be4d-xtables-lock\") pod \"kube-proxy-kzg2k\" (UID: \"0b01b586-ec2b-4750-8f4d-a3967577be4d\") " pod="kube-system/kube-proxy-kzg2k"
	Oct 18 15:01:49 pause-552434 kubelet[1287]: I1018 15:01:49.505392    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btvsl\" (UniqueName: \"kubernetes.io/projected/0b01b586-ec2b-4750-8f4d-a3967577be4d-kube-api-access-btvsl\") pod \"kube-proxy-kzg2k\" (UID: \"0b01b586-ec2b-4750-8f4d-a3967577be4d\") " pod="kube-system/kube-proxy-kzg2k"
	Oct 18 15:01:50 pause-552434 kubelet[1287]: I1018 15:01:50.753792    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5tfsb" podStartSLOduration=1.75377274 podStartE2EDuration="1.75377274s" podCreationTimestamp="2025-10-18 15:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:01:50.753663013 +0000 UTC m=+7.172828186" watchObservedRunningTime="2025-10-18 15:01:50.75377274 +0000 UTC m=+7.172937916"
	Oct 18 15:01:50 pause-552434 kubelet[1287]: I1018 15:01:50.763275    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kzg2k" podStartSLOduration=1.763252609 podStartE2EDuration="1.763252609s" podCreationTimestamp="2025-10-18 15:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:01:50.763111713 +0000 UTC m=+7.182276886" watchObservedRunningTime="2025-10-18 15:01:50.763252609 +0000 UTC m=+7.182417780"
	Oct 18 15:02:30 pause-552434 kubelet[1287]: I1018 15:02:30.373834    1287 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 15:02:30 pause-552434 kubelet[1287]: I1018 15:02:30.510028    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg498\" (UniqueName: \"kubernetes.io/projected/59ff8c5d-ba04-4834-90ef-4b06698de3ac-kube-api-access-kg498\") pod \"coredns-66bc5c9577-r2jd5\" (UID: \"59ff8c5d-ba04-4834-90ef-4b06698de3ac\") " pod="kube-system/coredns-66bc5c9577-r2jd5"
	Oct 18 15:02:30 pause-552434 kubelet[1287]: I1018 15:02:30.510134    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59ff8c5d-ba04-4834-90ef-4b06698de3ac-config-volume\") pod \"coredns-66bc5c9577-r2jd5\" (UID: \"59ff8c5d-ba04-4834-90ef-4b06698de3ac\") " pod="kube-system/coredns-66bc5c9577-r2jd5"
	Oct 18 15:02:30 pause-552434 kubelet[1287]: I1018 15:02:30.842345    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-r2jd5" podStartSLOduration=41.842324475 podStartE2EDuration="41.842324475s" podCreationTimestamp="2025-10-18 15:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:02:30.842156194 +0000 UTC m=+47.261321390" watchObservedRunningTime="2025-10-18 15:02:30.842324475 +0000 UTC m=+47.261489647"
	Oct 18 15:02:38 pause-552434 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 15:02:38 pause-552434 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 15:02:38 pause-552434 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 15:02:38 pause-552434 systemd[1]: kubelet.service: Consumed 2.331s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-552434 -n pause-552434
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-552434 -n pause-552434: exit status 2 (351.489384ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-552434 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.81s)

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-948537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-948537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (241.664771ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:04:18Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-948537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-948537 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-948537 describe deploy/metrics-server -n kube-system: exit status 1 (63.168348ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-948537 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-948537
helpers_test.go:243: (dbg) docker inspect old-k8s-version-948537:

-- stdout --
	[
	    {
	        "Id": "3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7",
	        "Created": "2025-10-18T15:03:24.489578766Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 317899,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T15:03:24.525623859Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7/hostname",
	        "HostsPath": "/var/lib/docker/containers/3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7/hosts",
	        "LogPath": "/var/lib/docker/containers/3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7/3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7-json.log",
	        "Name": "/old-k8s-version-948537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-948537:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-948537",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7",
	                "LowerDir": "/var/lib/docker/overlay2/4e9f5ec3d6a3c3a3e9655b5f49c6cec160d792c920c0ef7bc94b940d3800dbd6-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e9f5ec3d6a3c3a3e9655b5f49c6cec160d792c920c0ef7bc94b940d3800dbd6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e9f5ec3d6a3c3a3e9655b5f49c6cec160d792c920c0ef7bc94b940d3800dbd6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e9f5ec3d6a3c3a3e9655b5f49c6cec160d792c920c0ef7bc94b940d3800dbd6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-948537",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-948537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-948537",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-948537",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-948537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "934e5e8a26f8b5388de519f611d5c72f905c95d2322fe251fb0f368b8489ad99",
	            "SandboxKey": "/var/run/docker/netns/934e5e8a26f8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-948537": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:52:00:8d:e6:c3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "61ee9ee46471b491cbfab6422a4dbe2929bd7ab545265cf14dbd822e55ffe7f8",
	                    "EndpointID": "c535a203828d7f98f2dcd6c900ea7b4dcb73cf66ed560893d8d3c86ba5a37915",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-948537",
	                        "3730ae01e013"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-948537 -n old-k8s-version-948537
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-948537 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-948537 logs -n 25: (1.064791514s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cilium-034446                                                                                                                                                                                                                              │ cilium-034446             │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ start   │ -p force-systemd-flag-536692 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-536692 │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ ssh     │ -p NoKubernetes-286873 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-286873       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ ssh     │ force-systemd-flag-536692 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-536692 │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ delete  │ -p force-systemd-flag-536692                                                                                                                                                                                                                  │ force-systemd-flag-536692 │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ start   │ -p force-systemd-env-680592 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-680592  │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ start   │ -p pause-552434 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-552434              │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ pause   │ -p pause-552434 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-552434              │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ stop    │ -p NoKubernetes-286873                                                                                                                                                                                                                        │ NoKubernetes-286873       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ start   │ -p NoKubernetes-286873 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-286873       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ delete  │ -p pause-552434                                                                                                                                                                                                                               │ pause-552434              │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ start   │ -p cert-expiration-327346 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-327346    │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:03 UTC │
	│ delete  │ -p force-systemd-env-680592                                                                                                                                                                                                                   │ force-systemd-env-680592  │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ ssh     │ -p NoKubernetes-286873 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-286873       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ delete  │ -p NoKubernetes-286873                                                                                                                                                                                                                        │ NoKubernetes-286873       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ start   │ -p cert-options-648086 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-648086       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:03 UTC │
	│ start   │ -p missing-upgrade-635158 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-635158    │ jenkins │ v1.32.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:03 UTC │
	│ ssh     │ cert-options-648086 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-648086       │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:03 UTC │
	│ ssh     │ -p cert-options-648086 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-648086       │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:03 UTC │
	│ delete  │ -p cert-options-648086                                                                                                                                                                                                                        │ cert-options-648086       │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:03 UTC │
	│ start   │ -p old-k8s-version-948537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:04 UTC │
	│ start   │ -p missing-upgrade-635158 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-635158    │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:04 UTC │
	│ delete  │ -p missing-upgrade-635158                                                                                                                                                                                                                     │ missing-upgrade-635158    │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ start   │ -p no-preload-165275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165275         │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-948537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:04:13
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:04:13.304278  326380 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:04:13.304561  326380 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:04:13.304572  326380 out.go:374] Setting ErrFile to fd 2...
	I1018 15:04:13.304578  326380 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:04:13.304799  326380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:04:13.305341  326380 out.go:368] Setting JSON to false
	I1018 15:04:13.306679  326380 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10004,"bootTime":1760789849,"procs":424,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:04:13.306768  326380 start.go:141] virtualization: kvm guest
	I1018 15:04:13.308800  326380 out.go:179] * [no-preload-165275] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:04:13.310126  326380 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:04:13.310131  326380 notify.go:220] Checking for updates...
	I1018 15:04:13.311415  326380 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:04:13.312658  326380 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:04:13.314043  326380 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 15:04:13.315255  326380 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:04:13.316586  326380 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:04:13.318368  326380 config.go:182] Loaded profile config "cert-expiration-327346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:04:13.318487  326380 config.go:182] Loaded profile config "kubernetes-upgrade-833162": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:04:13.318597  326380 config.go:182] Loaded profile config "old-k8s-version-948537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 15:04:13.318717  326380 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:04:13.341953  326380 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:04:13.342111  326380 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:04:13.398662  326380 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-18 15:04:13.389352083 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:04:13.398806  326380 docker.go:318] overlay module found
	I1018 15:04:13.400752  326380 out.go:179] * Using the docker driver based on user configuration
	I1018 15:04:13.402129  326380 start.go:305] selected driver: docker
	I1018 15:04:13.402148  326380 start.go:925] validating driver "docker" against <nil>
	I1018 15:04:13.402162  326380 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:04:13.402753  326380 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:04:13.461652  326380 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-18 15:04:13.450718204 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:04:13.461824  326380 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 15:04:13.462078  326380 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:04:13.463637  326380 out.go:179] * Using Docker driver with root privileges
	I1018 15:04:13.464641  326380 cni.go:84] Creating CNI manager for ""
	I1018 15:04:13.464699  326380 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:04:13.464712  326380 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
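	The two cni.go lines above record the CNI decision: the docker driver combined with a non-docker runtime (crio) makes minikube recommend kindnet and force NetworkPlugin=cni. A minimal Go sketch of that branch; chooseCNI is a hypothetical simplification, not minikube's actual cni.go, which weighs more inputs:

	package main

	import "fmt"

	// chooseCNI mirrors the logged decision: "docker" driver plus a
	// non-docker container runtime such as crio selects kindnet.
	func chooseCNI(driver, runtime string) string {
		if driver == "docker" && runtime != "docker" {
			return "kindnet"
		}
		return "auto"
	}

	func main() {
		fmt.Println(chooseCNI("docker", "crio")) // kindnet
	}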
	I1018 15:04:13.464774  326380 start.go:349] cluster config:
	{Name:no-preload-165275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-165275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:04:13.465962  326380 out.go:179] * Starting "no-preload-165275" primary control-plane node in "no-preload-165275" cluster
	I1018 15:04:13.467076  326380 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:04:13.468194  326380 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:04:13.469254  326380 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:04:13.469350  326380 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 15:04:13.469368  326380 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/config.json ...
	I1018 15:04:13.469416  326380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/config.json: {Name:mk223870242868a8e50258451093343bebcd8d30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
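	The config save above acquires a named write lock (Delay:500ms Timeout:1m0s) before config.json is touched. A sketch of the acquire-then-WriteFile pattern; this version serializes writers only within one process, whereas minikube's real lock.go coordinates across processes:

	package main

	import (
		"os"
		"sync"
	)

	var (
		mu    sync.Mutex
		locks = map[string]*sync.Mutex{}
	)

	// writeLocked takes a per-path lock, then writes the file; an in-process
	// stand-in for the "WriteFile acquiring ... lock" line above.
	func writeLocked(path string, data []byte) error {
		mu.Lock()
		l, ok := locks[path]
		if !ok {
			l = &sync.Mutex{}
			locks[path] = l
		}
		mu.Unlock()

		l.Lock()
		defer l.Unlock()
		return os.WriteFile(path, data, 0o644)
	}

	func main() {
		_ = writeLocked("/tmp/config.json", []byte("{}"))
	}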
	I1018 15:04:13.469601  326380 cache.go:107] acquiring lock: {Name:mkcd0e2847def5d7525f56b72d40ef8eb4661666 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:04:13.469604  326380 cache.go:107] acquiring lock: {Name:mk12de1c820b10b304bb440284c1b6916a987889 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:04:13.469601  326380 cache.go:107] acquiring lock: {Name:mkecab1d576a5cee47304bc15dc72f9970f45c8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:04:13.469668  326380 cache.go:107] acquiring lock: {Name:mk1d022df204329fecb8dfdd48f2e6a2af0f3a7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:04:13.469666  326380 cache.go:107] acquiring lock: {Name:mkd6be508b79cf0b608e0017623eb5fbcb6b5bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:04:13.469748  326380 cache.go:115] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 15:04:13.469761  326380 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 175.176µs
	I1018 15:04:13.469776  326380 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 15:04:13.469604  326380 cache.go:107] acquiring lock: {Name:mk314bda0d4e90238c0ed6d4b64ac6d98bf9f0e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:04:13.469751  326380 cache.go:107] acquiring lock: {Name:mk72463510bc510f518ea67b24aec16a4002f6be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:04:13.469769  326380 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 15:04:13.469824  326380 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 15:04:13.469752  326380 cache.go:107] acquiring lock: {Name:mkbaa1a4bd6915358a4926d0351a0e021f54346d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:04:13.469875  326380 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1018 15:04:13.469930  326380 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 15:04:13.469814  326380 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 15:04:13.469953  326380 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1018 15:04:13.470019  326380 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 15:04:13.471158  326380 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1018 15:04:13.471158  326380 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1018 15:04:13.471172  326380 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 15:04:13.471175  326380 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 15:04:13.471179  326380 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 15:04:13.471316  326380 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 15:04:13.471410  326380 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 15:04:13.492865  326380 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:04:13.492891  326380 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 15:04:13.492939  326380 cache.go:232] Successfully downloaded all kic artifacts
	I1018 15:04:13.492970  326380 start.go:360] acquireMachinesLock for no-preload-165275: {Name:mk24a38ac6e4e8fc6cc6d51b67ac49da84578c77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:04:13.493086  326380 start.go:364] duration metric: took 91.033µs to acquireMachinesLock for "no-preload-165275"
	I1018 15:04:13.493129  326380 start.go:93] Provisioning new machine with config: &{Name:no-preload-165275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-165275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:04:13.493236  326380 start.go:125] createHost starting for "" (driver="docker")
	I1018 15:04:13.495315  326380 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 15:04:13.495514  326380 start.go:159] libmachine.API.Create for "no-preload-165275" (driver="docker")
	I1018 15:04:13.495541  326380 client.go:168] LocalClient.Create starting
	I1018 15:04:13.495587  326380 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem
	I1018 15:04:13.495617  326380 main.go:141] libmachine: Decoding PEM data...
	I1018 15:04:13.495635  326380 main.go:141] libmachine: Parsing certificate...
	I1018 15:04:13.495696  326380 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem
	I1018 15:04:13.495719  326380 main.go:141] libmachine: Decoding PEM data...
	I1018 15:04:13.495728  326380 main.go:141] libmachine: Parsing certificate...
	I1018 15:04:13.496085  326380 cli_runner.go:164] Run: docker network inspect no-preload-165275 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 15:04:13.514423  326380 cli_runner.go:211] docker network inspect no-preload-165275 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 15:04:13.514506  326380 network_create.go:284] running [docker network inspect no-preload-165275] to gather additional debugging logs...
	I1018 15:04:13.514534  326380 cli_runner.go:164] Run: docker network inspect no-preload-165275
	W1018 15:04:13.531385  326380 cli_runner.go:211] docker network inspect no-preload-165275 returned with exit code 1
	I1018 15:04:13.531411  326380 network_create.go:287] error running [docker network inspect no-preload-165275]: docker network inspect no-preload-165275: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-165275 not found
	I1018 15:04:13.531422  326380 network_create.go:289] output of [docker network inspect no-preload-165275]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-165275 not found
	
	** /stderr **
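	The inspect-then-rerun-for-logs exchange above is how minikube distinguishes "network absent" from a real failure: exit status 1 with a "not found" stderr means it can proceed to create the network. A hedged sketch of that probe using os/exec; networkExists is a hypothetical helper, not minikube's network_create.go:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// networkExists runs `docker network inspect` and treats a "not found"
	// stderr as absence rather than an error, as the log above does.
	func networkExists(name string) (bool, error) {
		out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
		if err == nil {
			return true, nil
		}
		if strings.Contains(string(out), "not found") {
			return false, nil
		}
		return false, fmt.Errorf("inspect %s: %v: %s", name, err, out)
	}

	func main() {
		ok, err := networkExists("no-preload-165275")
		fmt.Println(ok, err)
	}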
	I1018 15:04:13.531504  326380 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:04:13.549263  326380 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-67ded9675d49 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:eb:89:76:0f:a6} reservation:<nil>}
	I1018 15:04:13.549686  326380 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4b365c92bc46 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:db:b6:83:36:75} reservation:<nil>}
	I1018 15:04:13.550096  326380 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ab6063c7cdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:eb:32:cc:ab:b4} reservation:<nil>}
	I1018 15:04:13.550512  326380 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5528bde5ee94 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a6:2b:9f:5f:61:b1} reservation:<nil>}
	I1018 15:04:13.551123  326380 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00082c930}
	I1018 15:04:13.551145  326380 network_create.go:124] attempt to create docker network no-preload-165275 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 15:04:13.551213  326380 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-165275 no-preload-165275
	I1018 15:04:13.611931  326380 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1018 15:04:13.612386  326380 network_create.go:108] docker network no-preload-165275 192.168.85.0/24 created
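	The scan above walks candidate private /24 subnets and takes the first one no bridge interface has claimed, here 192.168.85.0/24. A minimal sketch of that walk; the step of 9 and the taken set are inferred from the 49 -> 58 -> 67 -> 76 -> 85 progression in the log, not read from minikube's network.go:

	package main

	import (
		"errors"
		"fmt"
	)

	// freeSubnet returns the first 192.168.x.0/24 not already taken.
	func freeSubnet(taken map[string]bool) (string, error) {
		for third := 49; third <= 247; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr, nil
			}
		}
		return "", errors.New("no free private /24 in range")
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
		}
		fmt.Println(freeSubnet(taken)) // 192.168.85.0/24 <nil>
	}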
	I1018 15:04:13.612417  326380 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-165275" container
	I1018 15:04:13.612467  326380 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 15:04:13.615659  326380 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1018 15:04:13.617094  326380 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1018 15:04:13.618404  326380 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1018 15:04:13.630949  326380 cli_runner.go:164] Run: docker volume create no-preload-165275 --label name.minikube.sigs.k8s.io=no-preload-165275 --label created_by.minikube.sigs.k8s.io=true
	I1018 15:04:13.641649  326380 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1018 15:04:13.642430  326380 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1018 15:04:13.649425  326380 oci.go:103] Successfully created a docker volume no-preload-165275
	I1018 15:04:13.649483  326380 cli_runner.go:164] Run: docker run --rm --name no-preload-165275-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-165275 --entrypoint /usr/bin/test -v no-preload-165275:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 15:04:13.651423  326380 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1018 15:04:13.708206  326380 cache.go:157] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1018 15:04:13.708232  326380 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 238.618478ms
	I1018 15:04:13.708246  326380 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 15:04:14.003977  326380 cache.go:157] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 15:04:14.004009  326380 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 534.310029ms
	I1018 15:04:14.004030  326380 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 15:04:14.097396  326380 oci.go:107] Successfully prepared a docker volume no-preload-165275
	I1018 15:04:14.097426  326380 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1018 15:04:14.097528  326380 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 15:04:14.097591  326380 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 15:04:14.097640  326380 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 15:04:14.157742  326380 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-165275 --name no-preload-165275 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-165275 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-165275 --network no-preload-165275 --ip 192.168.85.2 --volume no-preload-165275:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 15:04:14.428698  326380 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Running}}
	I1018 15:04:14.447871  326380 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:04:14.465907  326380 cli_runner.go:164] Run: docker exec no-preload-165275 stat /var/lib/dpkg/alternatives/iptables
	I1018 15:04:14.515096  326380 oci.go:144] the created container "no-preload-165275" has a running status.
	I1018 15:04:14.515135  326380 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa...
	I1018 15:04:14.585656  326380 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 15:04:14.611262  326380 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:04:14.630707  326380 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 15:04:14.630731  326380 kic_runner.go:114] Args: [docker exec --privileged no-preload-165275 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 15:04:14.678765  326380 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:04:14.698868  326380 machine.go:93] provisionDockerMachine start ...
	I1018 15:04:14.698993  326380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:04:14.723542  326380 main.go:141] libmachine: Using SSH client type: native
	I1018 15:04:14.723881  326380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1018 15:04:14.723899  326380 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:04:14.726708  326380 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
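	The "handshake failed: EOF" above is an expected early attempt: sshd inside the fresh container is not up yet, so the client retries the forwarded port until it answers (success follows at 15:04:17). A generic retry sketch under that assumption, not libmachine's actual code:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry keeps knocking on the forwarded SSH port until it
	// answers or the deadline passes; early failures are simply retried.
	func dialWithRetry(addr string, timeout time.Duration) (net.Conn, error) {
		deadline := time.Now().Add(timeout)
		for {
			c, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				return c, nil
			}
			if time.Now().After(deadline) {
				return nil, err
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		c, err := dialWithRetry("127.0.0.1:33058", 10*time.Second)
		fmt.Println(c != nil, err)
	}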
	I1018 15:04:14.945812  326380 cache.go:157] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 15:04:14.945844  326380 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.476266286s
	I1018 15:04:14.945868  326380 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 15:04:14.978504  326380 cache.go:157] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 15:04:14.978538  326380 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.508870463s
	I1018 15:04:14.978556  326380 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 15:04:14.993956  326380 cache.go:157] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 15:04:14.993986  326380 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.524393714s
	I1018 15:04:14.994001  326380 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 15:04:15.061424  326380 cache.go:157] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 15:04:15.061451  326380 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.59186476s
	I1018 15:04:15.061467  326380 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 15:04:15.328987  326380 cache.go:157] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 15:04:15.329020  326380 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 1.859313278s
	I1018 15:04:15.329037  326380 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 15:04:15.329060  326380 cache.go:87] Successfully saved all images to host disk.
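	Every image above follows the same cache discipline: if the tarball already exists under .minikube/cache/images, record the (sub-millisecond) hit and move on; otherwise open it, download, and only then report "succeeded". A sketch of that check, with save standing in as a hypothetical download callback:

	package main

	import (
		"log"
		"os"
		"time"
	)

	// ensureCached saves img to tarPath only when absent, logging in the
	// same shape as the cache.go lines above.
	func ensureCached(img, tarPath string, save func(img, dst string) error) error {
		start := time.Now()
		if _, err := os.Stat(tarPath); err == nil {
			log.Printf("cache image %q -> %q took %s (exists)", img, tarPath, time.Since(start))
			return nil
		}
		if err := save(img, tarPath); err != nil {
			return err
		}
		log.Printf("save to tar file %s -> %s succeeded", img, tarPath)
		return nil
	}

	func main() {
		_ = ensureCached("registry.k8s.io/pause:3.10.1", "/tmp/pause_3.10.1",
			func(img, dst string) error { return os.WriteFile(dst, nil, 0o644) })
	}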
	I1018 15:04:17.866358  326380 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-165275
	
	I1018 15:04:17.866390  326380 ubuntu.go:182] provisioning hostname "no-preload-165275"
	I1018 15:04:17.866462  326380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:04:17.885999  326380 main.go:141] libmachine: Using SSH client type: native
	I1018 15:04:17.886288  326380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1018 15:04:17.886303  326380 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-165275 && echo "no-preload-165275" | sudo tee /etc/hostname
	I1018 15:04:18.034194  326380 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-165275
	
	I1018 15:04:18.034284  326380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:04:18.052997  326380 main.go:141] libmachine: Using SSH client type: native
	I1018 15:04:18.053289  326380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1018 15:04:18.053322  326380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-165275' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-165275/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-165275' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:04:18.192069  326380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
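	The shell fragment above is idempotent: it only touches /etc/hosts when no line already ends in the hostname, replacing an existing 127.0.1.1 entry or appending one so the host resolves its own name. A sketch that parameterizes the same script by hostname (hostsFixup is a hypothetical helper):

	package main

	import "fmt"

	// hostsFixup renders the /etc/hosts guard above for an arbitrary name.
	func hostsFixup(name string) string {
		return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
	}

	func main() {
		fmt.Println(hostsFixup("no-preload-165275"))
	}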
	I1018 15:04:18.192101  326380 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 15:04:18.192130  326380 ubuntu.go:190] setting up certificates
	I1018 15:04:18.192149  326380 provision.go:84] configureAuth start
	I1018 15:04:18.192220  326380 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-165275
	I1018 15:04:18.214001  326380 provision.go:143] copyHostCerts
	I1018 15:04:18.214067  326380 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem, removing ...
	I1018 15:04:18.214075  326380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:04:18.214151  326380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 15:04:18.214240  326380 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem, removing ...
	I1018 15:04:18.214249  326380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:04:18.214284  326380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 15:04:18.214371  326380 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem, removing ...
	I1018 15:04:18.214381  326380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:04:18.214410  326380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 15:04:18.214466  326380 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.no-preload-165275 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-165275]
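	The server cert above carries a mixed SAN list (two IPs, three DNS names); before filling an x509.Certificate, a generator must partition them into the IPAddresses and DNSNames fields. A small sketch of that split (splitSANs is a hypothetical name):

	package main

	import (
		"fmt"
		"net"
	)

	// splitSANs partitions subject alternative names into the two
	// x509.Certificate fields they end up in.
	func splitSANs(sans []string) (ips []net.IP, dns []string) {
		for _, s := range sans {
			if ip := net.ParseIP(s); ip != nil {
				ips = append(ips, ip)
			} else {
				dns = append(dns, s)
			}
		}
		return
	}

	func main() {
		ips, dns := splitSANs([]string{"127.0.0.1", "192.168.85.2", "localhost", "minikube", "no-preload-165275"})
		fmt.Println(ips, dns)
	}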
	I1018 15:04:18.255900  326380 provision.go:177] copyRemoteCerts
	I1018 15:04:18.255964  326380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:04:18.255997  326380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:04:18.276275  326380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa Username:docker}
	
	
	==> CRI-O <==
	Oct 18 15:04:04 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:04.752195532Z" level=info msg="Starting container: 783931a97182fed7f3e82622a0a79982efdac98890a12b1302afbdf16ec042ba" id=9a4edf54-87d1-4a90-8e82-71195af5f5f3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:04:04 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:04.754292408Z" level=info msg="Started container" PID=2167 containerID=783931a97182fed7f3e82622a0a79982efdac98890a12b1302afbdf16ec042ba description=kube-system/coredns-5dd5756b68-j8xvf/coredns id=9a4edf54-87d1-4a90-8e82-71195af5f5f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c2dad3f49615c1fada2c8deb421297a2e36abfa239f2f939f10f62b227870651
	Oct 18 15:04:08 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:08.062845071Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4201a9b4-3e36-4f05-bf4d-bfa5e2d7b6e4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:04:08 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:08.06314537Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:04:08 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:08.07422057Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4f1c55f3972d5350386b2aa2531e38d240b272a514ec67722090163eef17a101 UID:feac9cfc-147a-4085-b9f8-9cf69c26bba9 NetNS:/var/run/netns/66cdda2e-42d5-4af2-ac15-a9088488f5e9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006ea838}] Aliases:map[]}"
	Oct 18 15:04:08 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:08.074265531Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 15:04:08 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:08.091597477Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4f1c55f3972d5350386b2aa2531e38d240b272a514ec67722090163eef17a101 UID:feac9cfc-147a-4085-b9f8-9cf69c26bba9 NetNS:/var/run/netns/66cdda2e-42d5-4af2-ac15-a9088488f5e9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006ea838}] Aliases:map[]}"
	Oct 18 15:04:08 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:08.091729153Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 15:04:08 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:08.092647938Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 15:04:08 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:08.093887572Z" level=info msg="Ran pod sandbox 4f1c55f3972d5350386b2aa2531e38d240b272a514ec67722090163eef17a101 with infra container: default/busybox/POD" id=4201a9b4-3e36-4f05-bf4d-bfa5e2d7b6e4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:04:08 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:08.095453672Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1b477546-aad5-40a2-92a7-88ad7cab0ed9 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:04:08 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:08.095725813Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1b477546-aad5-40a2-92a7-88ad7cab0ed9 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:04:08 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:08.095782068Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=1b477546-aad5-40a2-92a7-88ad7cab0ed9 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:04:08 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:08.096926935Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=69a809ae-abee-479c-85d3-c3efff04d5f4 name=/runtime.v1.ImageService/PullImage
	Oct 18 15:04:08 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:08.101229405Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 15:04:10 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:10.184567251Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=69a809ae-abee-479c-85d3-c3efff04d5f4 name=/runtime.v1.ImageService/PullImage
	Oct 18 15:04:10 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:10.185561394Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=25bba374-5195-4d48-9572-270d197766d3 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:04:10 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:10.187320181Z" level=info msg="Creating container: default/busybox/busybox" id=92a828c2-3376-48cf-bf7c-1b3b9377569a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:04:10 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:10.188118648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:04:10 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:10.191903191Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:04:10 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:10.192361129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:04:10 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:10.225679098Z" level=info msg="Created container 83a5682658ab1e25b5fe4f4b91d2ca24050853397dcf279a04e869a256b489ac: default/busybox/busybox" id=92a828c2-3376-48cf-bf7c-1b3b9377569a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:04:10 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:10.227068248Z" level=info msg="Starting container: 83a5682658ab1e25b5fe4f4b91d2ca24050853397dcf279a04e869a256b489ac" id=23db8813-f83c-45bf-9c17-6d904c2be893 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:04:10 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:10.229268831Z" level=info msg="Started container" PID=2241 containerID=83a5682658ab1e25b5fe4f4b91d2ca24050853397dcf279a04e869a256b489ac description=default/busybox/busybox id=23db8813-f83c-45bf-9c17-6d904c2be893 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f1c55f3972d5350386b2aa2531e38d240b272a514ec67722090163eef17a101
	Oct 18 15:04:17 old-k8s-version-948537 crio[775]: time="2025-10-18T15:04:17.838642155Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
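	The busybox pull in the CRI-O log above is the standard CRI flow: ImageStatus reports the tag absent, PullImage fetches it, and the response carries the digest that CreateContainer then uses. A sketch against the CRI v1 gRPC API (k8s.io/cri-api), assuming crio's socket at its default path; error handling is minimal:

	package main

	import (
		"context"
		"fmt"
		"log"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ic := runtimeapi.NewImageServiceClient(conn)
		spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}

		// ImageStatus succeeds with a nil Image when the tag is not present,
		// matching the "not found" line above; only then do we pull.
		st, err := ic.ImageStatus(context.Background(), &runtimeapi.ImageStatusRequest{Image: spec})
		if err != nil {
			log.Fatal(err)
		}
		if st.Image == nil {
			resp, err := ic.PullImage(context.Background(), &runtimeapi.PullImageRequest{Image: spec})
			if err != nil {
				log.Fatal(err)
			}
			fmt.Println("pulled:", resp.ImageRef) // digest, as logged above
		}
	}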
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	83a5682658ab1       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   4f1c55f3972d5       busybox                                          default
	783931a97182f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      14 seconds ago      Running             coredns                   0                   c2dad3f49615c       coredns-5dd5756b68-j8xvf                         kube-system
	6c09178ab5358       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   5048c3c34b7cc       storage-provisioner                              kube-system
	ae5375c14705c       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    25 seconds ago      Running             kindnet-cni               0                   c329ff395e6ee       kindnet-xwd4j                                    kube-system
	595e7ce3fc8ba       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      28 seconds ago      Running             kube-proxy                0                   dfbc35b7261bc       kube-proxy-kwt74                                 kube-system
	9df832e2d175e       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      46 seconds ago      Running             kube-controller-manager   0                   750c2e3fe4c55       kube-controller-manager-old-k8s-version-948537   kube-system
	751b6859e67ca       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      46 seconds ago      Running             kube-apiserver            0                   993dd7323e31d       kube-apiserver-old-k8s-version-948537            kube-system
	057456e9189a0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      46 seconds ago      Running             etcd                      0                   b27f62ab288ce       etcd-old-k8s-version-948537                      kube-system
	6c6746cbb7376       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      46 seconds ago      Running             kube-scheduler            0                   e03b2b9046134       kube-scheduler-old-k8s-version-948537            kube-system
	
	
	==> coredns [783931a97182fed7f3e82622a0a79982efdac98890a12b1302afbdf16ec042ba] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37652 - 43456 "HINFO IN 7872864993281967421.1319961835948866602. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.112013174s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-948537
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-948537
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=old-k8s-version-948537
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_03_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:03:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-948537
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:04:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:04:08 +0000   Sat, 18 Oct 2025 15:03:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:04:08 +0000   Sat, 18 Oct 2025 15:03:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:04:08 +0000   Sat, 18 Oct 2025 15:03:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 15:04:08 +0000   Sat, 18 Oct 2025 15:04:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-948537
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                47943eca-9697-4781-a55f-5b00086edf55
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-j8xvf                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-old-k8s-version-948537                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         42s
	  kube-system                 kindnet-xwd4j                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-948537             250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-old-k8s-version-948537    200m (2%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-proxy-kwt74                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-948537             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  Starting                 47s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node old-k8s-version-948537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node old-k8s-version-948537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node old-k8s-version-948537 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s                kubelet          Node old-k8s-version-948537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s                kubelet          Node old-k8s-version-948537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s                kubelet          Node old-k8s-version-948537 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-948537 event: Registered Node old-k8s-version-948537 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-948537 status is now: NodeReady
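	The Conditions table above is rendered from node.Status.Conditions; Ready flipping True at 15:04:04 is the NodeReady event at the bottom. A sketch of the usual readiness predicate over those conditions, using only the k8s.io/api types:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isReady reports whether the NodeReady condition is True, the same
	// predicate the status table above summarizes.
	func isReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		n := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
			{Type: corev1.NodeReady, Status: corev1.ConditionTrue},
		}}}
		fmt.Println(isReady(n)) // true
	}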
	
	
	==> dmesg <==
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	
	
	==> etcd [057456e9189a08fd9084fba8ad0ff2c293d181c74d3a1ed488e2af5b20372540] <==
	{"level":"info","ts":"2025-10-18T15:03:32.681434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-10-18T15:03:32.681604Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-10-18T15:03:32.683075Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T15:03:32.683219Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-18T15:03:32.683292Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-18T15:03:32.683316Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T15:03:32.683355Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T15:03:33.671827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-18T15:03:33.671907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-18T15:03:33.671968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-10-18T15:03:33.671994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-10-18T15:03:33.672004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-18T15:03:33.672018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-10-18T15:03:33.672031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-18T15:03:33.672871Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T15:03:33.673559Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T15:03:33.673559Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-948537 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T15:03:33.673613Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T15:03:33.673833Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T15:03:33.673949Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T15:03:33.673906Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T15:03:33.673988Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T15:03:33.673993Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T15:03:33.674781Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-10-18T15:03:33.674795Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 15:04:19 up  2:46,  0 user,  load average: 2.38, 2.30, 1.65
	Linux old-k8s-version-948537 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ae5375c14705c563dd1b559a143e49778a1f217b47cb83e372c4e00611b9ec60] <==
	I1018 15:03:53.748489       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 15:03:53.748864       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 15:03:53.749036       1 main.go:148] setting mtu 1500 for CNI 
	I1018 15:03:53.749056       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 15:03:53.749082       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T15:03:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 15:03:53.949045       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 15:03:53.949183       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 15:03:53.949200       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 15:03:53.949472       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 15:03:54.349465       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 15:03:54.349507       1 metrics.go:72] Registering metrics
	I1018 15:03:54.349584       1 controller.go:711] "Syncing nftables rules"
	I1018 15:04:03.957028       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:04:03.957108       1 main.go:301] handling current node
	I1018 15:04:13.950038       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:04:13.950148       1 main.go:301] handling current node
	
	
	==> kube-apiserver [751b6859e67cab78c068be92f76bf074223d59a6a1934d714442e9203e822dec] <==
	I1018 15:03:34.722155       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 15:03:34.722893       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 15:03:34.722984       1 aggregator.go:166] initial CRD sync complete...
	I1018 15:03:34.723001       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 15:03:34.723008       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 15:03:34.723017       1 cache.go:39] Caches are synced for autoregister controller
	I1018 15:03:34.723782       1 shared_informer.go:318] Caches are synced for configmaps
	I1018 15:03:34.737904       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1018 15:03:34.924002       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 15:03:35.622550       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 15:03:35.626091       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 15:03:35.626110       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:03:36.002373       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:03:36.037266       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:03:36.128279       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 15:03:36.133994       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1018 15:03:36.135000       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 15:03:36.140723       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 15:03:36.666704       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 15:03:37.589246       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 15:03:37.603677       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 15:03:37.614415       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1018 15:03:49.935592       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 15:03:50.733154       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1018 15:03:50.733155       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [9df832e2d175e770053d65836dcf1b8a4e375280ffcb875c90f5b3977a621fdd] <==
	I1018 15:03:50.083963       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1018 15:03:50.084408       1 shared_informer.go:318] Caches are synced for resource quota
	I1018 15:03:50.401946       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 15:03:50.430294       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 15:03:50.430326       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 15:03:50.741018       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xwd4j"
	I1018 15:03:50.742966       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kwt74"
	I1018 15:03:50.785586       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-jrbj7"
	I1018 15:03:50.790382       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-j8xvf"
	I1018 15:03:50.796813       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="856.951664ms"
	I1018 15:03:50.801947       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.050269ms"
	I1018 15:03:50.802044       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.538µs"
	I1018 15:03:50.803527       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="125.398µs"
	I1018 15:03:51.555550       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1018 15:03:51.570345       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-jrbj7"
	I1018 15:03:51.582887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.564167ms"
	I1018 15:03:51.598246       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.199211ms"
	I1018 15:03:51.610556       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.256393ms"
	I1018 15:03:51.610662       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.032µs"
	I1018 15:04:04.384532       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="116.942µs"
	I1018 15:04:04.403046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="132.111µs"
	I1018 15:04:04.786366       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="140.848µs"
	I1018 15:04:04.883751       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1018 15:04:05.787761       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.06453ms"
	I1018 15:04:05.787881       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.114µs"
	
	
	==> kube-proxy [595e7ce3fc8bacf4788bc7f70b4c8b246ff4ce3d1689b709c82a82918d3a42b7] <==
	I1018 15:03:51.146345       1 server_others.go:69] "Using iptables proxy"
	I1018 15:03:51.159875       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1018 15:03:51.214360       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 15:03:51.219689       1 server_others.go:152] "Using iptables Proxier"
	I1018 15:03:51.219798       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 15:03:51.219844       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 15:03:51.219896       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 15:03:51.220239       1 server.go:846] "Version info" version="v1.28.0"
	I1018 15:03:51.220507       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:03:51.221794       1 config.go:188] "Starting service config controller"
	I1018 15:03:51.223048       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 15:03:51.222631       1 config.go:315] "Starting node config controller"
	I1018 15:03:51.223359       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 15:03:51.222655       1 config.go:97] "Starting endpoint slice config controller"
	I1018 15:03:51.223384       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 15:03:51.323722       1 shared_informer.go:318] Caches are synced for service config
	I1018 15:03:51.323794       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1018 15:03:51.323827       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6c6746cbb73769c0b2cbf072dbbd56d1a2e8fd0ad4c09e0901137657cdd13659] <==
	W1018 15:03:34.685414       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1018 15:03:34.685434       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1018 15:03:34.685560       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1018 15:03:34.685583       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1018 15:03:35.530814       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1018 15:03:35.530846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1018 15:03:35.533029       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1018 15:03:35.533051       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1018 15:03:35.584052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1018 15:03:35.584088       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1018 15:03:35.662427       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1018 15:03:35.662466       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1018 15:03:35.676505       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1018 15:03:35.676539       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1018 15:03:35.806503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1018 15:03:35.806540       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1018 15:03:35.840395       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1018 15:03:35.840424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1018 15:03:35.858964       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1018 15:03:35.858998       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1018 15:03:35.871292       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1018 15:03:35.871331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1018 15:03:36.025295       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1018 15:03:36.025328       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1018 15:03:39.281495       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 15:03:49 old-k8s-version-948537 kubelet[1389]: I1018 15:03:49.928339    1389 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 15:03:49 old-k8s-version-948537 kubelet[1389]: I1018 15:03:49.929165    1389 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 15:03:50 old-k8s-version-948537 kubelet[1389]: I1018 15:03:50.746348    1389 topology_manager.go:215] "Topology Admit Handler" podUID="21ae3860-2d55-4c5c-8e1a-19ad2fe19dc6" podNamespace="kube-system" podName="kindnet-xwd4j"
	Oct 18 15:03:50 old-k8s-version-948537 kubelet[1389]: I1018 15:03:50.748227    1389 topology_manager.go:215] "Topology Admit Handler" podUID="e0a3d7d2-09ef-478b-85b5-f07938fcc069" podNamespace="kube-system" podName="kube-proxy-kwt74"
	Oct 18 15:03:50 old-k8s-version-948537 kubelet[1389]: I1018 15:03:50.842079    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21ae3860-2d55-4c5c-8e1a-19ad2fe19dc6-lib-modules\") pod \"kindnet-xwd4j\" (UID: \"21ae3860-2d55-4c5c-8e1a-19ad2fe19dc6\") " pod="kube-system/kindnet-xwd4j"
	Oct 18 15:03:50 old-k8s-version-948537 kubelet[1389]: I1018 15:03:50.842148    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e0a3d7d2-09ef-478b-85b5-f07938fcc069-kube-proxy\") pod \"kube-proxy-kwt74\" (UID: \"e0a3d7d2-09ef-478b-85b5-f07938fcc069\") " pod="kube-system/kube-proxy-kwt74"
	Oct 18 15:03:50 old-k8s-version-948537 kubelet[1389]: I1018 15:03:50.842178    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21ae3860-2d55-4c5c-8e1a-19ad2fe19dc6-xtables-lock\") pod \"kindnet-xwd4j\" (UID: \"21ae3860-2d55-4c5c-8e1a-19ad2fe19dc6\") " pod="kube-system/kindnet-xwd4j"
	Oct 18 15:03:50 old-k8s-version-948537 kubelet[1389]: I1018 15:03:50.842210    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmtgk\" (UniqueName: \"kubernetes.io/projected/21ae3860-2d55-4c5c-8e1a-19ad2fe19dc6-kube-api-access-vmtgk\") pod \"kindnet-xwd4j\" (UID: \"21ae3860-2d55-4c5c-8e1a-19ad2fe19dc6\") " pod="kube-system/kindnet-xwd4j"
	Oct 18 15:03:50 old-k8s-version-948537 kubelet[1389]: I1018 15:03:50.842243    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0a3d7d2-09ef-478b-85b5-f07938fcc069-xtables-lock\") pod \"kube-proxy-kwt74\" (UID: \"e0a3d7d2-09ef-478b-85b5-f07938fcc069\") " pod="kube-system/kube-proxy-kwt74"
	Oct 18 15:03:50 old-k8s-version-948537 kubelet[1389]: I1018 15:03:50.842279    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88djr\" (UniqueName: \"kubernetes.io/projected/e0a3d7d2-09ef-478b-85b5-f07938fcc069-kube-api-access-88djr\") pod \"kube-proxy-kwt74\" (UID: \"e0a3d7d2-09ef-478b-85b5-f07938fcc069\") " pod="kube-system/kube-proxy-kwt74"
	Oct 18 15:03:50 old-k8s-version-948537 kubelet[1389]: I1018 15:03:50.842337    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0a3d7d2-09ef-478b-85b5-f07938fcc069-lib-modules\") pod \"kube-proxy-kwt74\" (UID: \"e0a3d7d2-09ef-478b-85b5-f07938fcc069\") " pod="kube-system/kube-proxy-kwt74"
	Oct 18 15:03:50 old-k8s-version-948537 kubelet[1389]: I1018 15:03:50.842386    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/21ae3860-2d55-4c5c-8e1a-19ad2fe19dc6-cni-cfg\") pod \"kindnet-xwd4j\" (UID: \"21ae3860-2d55-4c5c-8e1a-19ad2fe19dc6\") " pod="kube-system/kindnet-xwd4j"
	Oct 18 15:03:51 old-k8s-version-948537 kubelet[1389]: I1018 15:03:51.733397    1389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kwt74" podStartSLOduration=1.733341325 podCreationTimestamp="2025-10-18 15:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:03:51.733084074 +0000 UTC m=+14.171190665" watchObservedRunningTime="2025-10-18 15:03:51.733341325 +0000 UTC m=+14.171447917"
	Oct 18 15:03:53 old-k8s-version-948537 kubelet[1389]: I1018 15:03:53.741698    1389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-xwd4j" podStartSLOduration=1.362332705 podCreationTimestamp="2025-10-18 15:03:50 +0000 UTC" firstStartedPulling="2025-10-18 15:03:51.056700678 +0000 UTC m=+13.494807265" lastFinishedPulling="2025-10-18 15:03:53.436006906 +0000 UTC m=+15.874113488" observedRunningTime="2025-10-18 15:03:53.741475269 +0000 UTC m=+16.179581861" watchObservedRunningTime="2025-10-18 15:03:53.741638928 +0000 UTC m=+16.179745519"
	Oct 18 15:04:04 old-k8s-version-948537 kubelet[1389]: I1018 15:04:04.353662    1389 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 18 15:04:04 old-k8s-version-948537 kubelet[1389]: I1018 15:04:04.381950    1389 topology_manager.go:215] "Topology Admit Handler" podUID="309bbd8a-c9c8-4f67-b838-aa0c230f04c5" podNamespace="kube-system" podName="storage-provisioner"
	Oct 18 15:04:04 old-k8s-version-948537 kubelet[1389]: I1018 15:04:04.383794    1389 topology_manager.go:215] "Topology Admit Handler" podUID="a4cd643f-8ca1-45d8-90e5-e114506edbee" podNamespace="kube-system" podName="coredns-5dd5756b68-j8xvf"
	Oct 18 15:04:04 old-k8s-version-948537 kubelet[1389]: I1018 15:04:04.547406    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgfsf\" (UniqueName: \"kubernetes.io/projected/a4cd643f-8ca1-45d8-90e5-e114506edbee-kube-api-access-tgfsf\") pod \"coredns-5dd5756b68-j8xvf\" (UID: \"a4cd643f-8ca1-45d8-90e5-e114506edbee\") " pod="kube-system/coredns-5dd5756b68-j8xvf"
	Oct 18 15:04:04 old-k8s-version-948537 kubelet[1389]: I1018 15:04:04.547470    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/309bbd8a-c9c8-4f67-b838-aa0c230f04c5-tmp\") pod \"storage-provisioner\" (UID: \"309bbd8a-c9c8-4f67-b838-aa0c230f04c5\") " pod="kube-system/storage-provisioner"
	Oct 18 15:04:04 old-k8s-version-948537 kubelet[1389]: I1018 15:04:04.547569    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4cd643f-8ca1-45d8-90e5-e114506edbee-config-volume\") pod \"coredns-5dd5756b68-j8xvf\" (UID: \"a4cd643f-8ca1-45d8-90e5-e114506edbee\") " pod="kube-system/coredns-5dd5756b68-j8xvf"
	Oct 18 15:04:04 old-k8s-version-948537 kubelet[1389]: I1018 15:04:04.547651    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2k8m\" (UniqueName: \"kubernetes.io/projected/309bbd8a-c9c8-4f67-b838-aa0c230f04c5-kube-api-access-d2k8m\") pod \"storage-provisioner\" (UID: \"309bbd8a-c9c8-4f67-b838-aa0c230f04c5\") " pod="kube-system/storage-provisioner"
	Oct 18 15:04:04 old-k8s-version-948537 kubelet[1389]: I1018 15:04:04.772019    1389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.771968854 podCreationTimestamp="2025-10-18 15:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:04:04.771702217 +0000 UTC m=+27.209808810" watchObservedRunningTime="2025-10-18 15:04:04.771968854 +0000 UTC m=+27.210075445"
	Oct 18 15:04:04 old-k8s-version-948537 kubelet[1389]: I1018 15:04:04.785170    1389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-j8xvf" podStartSLOduration=14.785117595 podCreationTimestamp="2025-10-18 15:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:04:04.784801026 +0000 UTC m=+27.222907616" watchObservedRunningTime="2025-10-18 15:04:04.785117595 +0000 UTC m=+27.223224186"
	Oct 18 15:04:07 old-k8s-version-948537 kubelet[1389]: I1018 15:04:07.757723    1389 topology_manager.go:215] "Topology Admit Handler" podUID="feac9cfc-147a-4085-b9f8-9cf69c26bba9" podNamespace="default" podName="busybox"
	Oct 18 15:04:07 old-k8s-version-948537 kubelet[1389]: I1018 15:04:07.866489    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzs9k\" (UniqueName: \"kubernetes.io/projected/feac9cfc-147a-4085-b9f8-9cf69c26bba9-kube-api-access-fzs9k\") pod \"busybox\" (UID: \"feac9cfc-147a-4085-b9f8-9cf69c26bba9\") " pod="default/busybox"
	
	
	==> storage-provisioner [6c09178ab5358a302e482d19520e322e4cc6d68abefaa2d83536bf3c036cc65d] <==
	I1018 15:04:04.761675       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 15:04:04.775078       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 15:04:04.775178       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 15:04:04.785067       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 15:04:04.785719       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ecc713f-94b4-44e1-9a32-99bd38e1b784", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-948537_d7c0e688-21c3-4659-9abd-1290d86c14a4 became leader
	I1018 15:04:04.785904       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-948537_d7c0e688-21c3-4659-9abd-1290d86c14a4!
	I1018 15:04:04.887051       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-948537_d7c0e688-21c3-4659-9abd-1290d86c14a4!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-948537 -n old-k8s-version-948537
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-948537 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-165275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-165275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (238.692695ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:05:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-165275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
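The error chain above (check paused -> list paused -> runc list) shows where the enable step dies: minikube's paused-state probe shells out to runc, and runc's default state directory /run/runc does not exist on this crio node, so the command fails before any addon manifest is applied. A minimal sketch of reproducing the probe by hand, assuming the profile name from this run and the crictl binary normally present in the kicbase node image:

	# The probe shown in the stderr above; exits non-zero here because /run/runc is absent.
	minikube ssh -p no-preload-165275 -- sudo runc list -f json
	# Listing container state through the CRI instead works on crio nodes.
	minikube ssh -p no-preload-165275 -- sudo crictl ps -a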
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-165275 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-165275 describe deploy/metrics-server -n kube-system: exit status 1 (60.545488ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-165275 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
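Had the enable step succeeded, the override could be verified directly against the deployment; a sketch of that check (here it reports NotFound, since the deployment was never created):

	kubectl --context no-preload-165275 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'

On success the output would contain fake.domain/registry.k8s.io/echoserver:1.4, the string the assertion above looks for.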
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-165275
helpers_test.go:243: (dbg) docker inspect no-preload-165275:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06",
	        "Created": "2025-10-18T15:04:14.174636016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 326844,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T15:04:14.20590254Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06/hostname",
	        "HostsPath": "/var/lib/docker/containers/aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06/hosts",
	        "LogPath": "/var/lib/docker/containers/aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06/aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06-json.log",
	        "Name": "/no-preload-165275",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-165275:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-165275",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06",
	                "LowerDir": "/var/lib/docker/overlay2/416402f5dd03bf1ed780ad68ce499c08cfc04a7c16ada619f14034f104f2e745-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/416402f5dd03bf1ed780ad68ce499c08cfc04a7c16ada619f14034f104f2e745/merged",
	                "UpperDir": "/var/lib/docker/overlay2/416402f5dd03bf1ed780ad68ce499c08cfc04a7c16ada619f14034f104f2e745/diff",
	                "WorkDir": "/var/lib/docker/overlay2/416402f5dd03bf1ed780ad68ce499c08cfc04a7c16ada619f14034f104f2e745/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-165275",
	                "Source": "/var/lib/docker/volumes/no-preload-165275/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-165275",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-165275",
	                "name.minikube.sigs.k8s.io": "no-preload-165275",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "be75315189e95d267ee6e1248ec4f0537c5b416d8f4997cac8855ecaf288d5da",
	            "SandboxKey": "/var/run/docker/netns/be75315189e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-165275": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:93:99:61:be:0c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2decf6b0e9a2edffe7ff29802fe30453af810cd2279b900d48c499fda7236039",
	                    "EndpointID": "b5dc7e304f203536e99a1b9cd103956700f03b8cdd025f960694daf9eb6b8f76",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-165275",
	                        "aa996275db3e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
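Most of the inspect dump above is boilerplate; the two fields the post-mortem actually leans on, container state and the profile network's address, can be pulled out with docker's built-in Go templating (a sketch against this profile; index is needed because the network name contains hyphens):

	docker inspect no-preload-165275 --format '{{.State.Status}} {{(index .NetworkSettings.Networks "no-preload-165275").IPAddress}}'

which would print "running 192.168.85.2" for the container shown here.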
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-165275 -n no-preload-165275
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-165275 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-165275 logs -n 25: (1.023176041s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-flag-536692                                                                                                                                                                                                                  │ force-systemd-flag-536692 │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ start   │ -p force-systemd-env-680592 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-680592  │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ start   │ -p pause-552434 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-552434              │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ pause   │ -p pause-552434 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-552434              │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ stop    │ -p NoKubernetes-286873                                                                                                                                                                                                                        │ NoKubernetes-286873       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ start   │ -p NoKubernetes-286873 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-286873       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ delete  │ -p pause-552434                                                                                                                                                                                                                               │ pause-552434              │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ start   │ -p cert-expiration-327346 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-327346    │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:03 UTC │
	│ delete  │ -p force-systemd-env-680592                                                                                                                                                                                                                   │ force-systemd-env-680592  │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ ssh     │ -p NoKubernetes-286873 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-286873       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ delete  │ -p NoKubernetes-286873                                                                                                                                                                                                                        │ NoKubernetes-286873       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ start   │ -p cert-options-648086 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-648086       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:03 UTC │
	│ start   │ -p missing-upgrade-635158 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-635158    │ jenkins │ v1.32.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:03 UTC │
	│ ssh     │ cert-options-648086 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-648086       │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:03 UTC │
	│ ssh     │ -p cert-options-648086 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-648086       │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:03 UTC │
	│ delete  │ -p cert-options-648086                                                                                                                                                                                                                        │ cert-options-648086       │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:03 UTC │
	│ start   │ -p old-k8s-version-948537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:04 UTC │
	│ start   │ -p missing-upgrade-635158 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-635158    │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:04 UTC │
	│ delete  │ -p missing-upgrade-635158                                                                                                                                                                                                                     │ missing-upgrade-635158    │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ start   │ -p no-preload-165275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165275         │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-948537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │                     │
	│ stop    │ -p old-k8s-version-948537 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-948537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ start   │ -p old-k8s-version-948537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-165275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-165275         │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:04:37
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:04:37.252509  331720 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:04:37.252792  331720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:04:37.252804  331720 out.go:374] Setting ErrFile to fd 2...
	I1018 15:04:37.252808  331720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:04:37.253015  331720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:04:37.253465  331720 out.go:368] Setting JSON to false
	I1018 15:04:37.254771  331720 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10028,"bootTime":1760789849,"procs":437,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:04:37.254861  331720 start.go:141] virtualization: kvm guest
	I1018 15:04:37.256883  331720 out.go:179] * [old-k8s-version-948537] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:04:37.258347  331720 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:04:37.258348  331720 notify.go:220] Checking for updates...
	I1018 15:04:37.259725  331720 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:04:37.261121  331720 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:04:37.262428  331720 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 15:04:37.263510  331720 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:04:37.264776  331720 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:04:37.266658  331720 config.go:182] Loaded profile config "old-k8s-version-948537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 15:04:37.268558  331720 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1018 15:04:37.269735  331720 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:04:37.294243  331720 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:04:37.294377  331720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:04:37.352174  331720 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-10-18 15:04:37.34206938 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:04:37.352302  331720 docker.go:318] overlay module found
	I1018 15:04:37.354018  331720 out.go:179] * Using the docker driver based on existing profile
	I1018 15:04:37.355344  331720 start.go:305] selected driver: docker
	I1018 15:04:37.355361  331720 start.go:925] validating driver "docker" against &{Name:old-k8s-version-948537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-948537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:04:37.355475  331720 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:04:37.356179  331720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:04:37.415830  331720 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-10-18 15:04:37.405870843 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:04:37.416185  331720 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:04:37.416216  331720 cni.go:84] Creating CNI manager for ""
	I1018 15:04:37.416265  331720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:04:37.416312  331720 start.go:349] cluster config:
	{Name:old-k8s-version-948537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-948537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:04:37.419163  331720 out.go:179] * Starting "old-k8s-version-948537" primary control-plane node in "old-k8s-version-948537" cluster
	I1018 15:04:37.420366  331720 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:04:37.421657  331720 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:04:37.422730  331720 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 15:04:37.422775  331720 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1018 15:04:37.422786  331720 cache.go:58] Caching tarball of preloaded images
	I1018 15:04:37.422835  331720 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 15:04:37.422907  331720 preload.go:233] Found /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 15:04:37.422965  331720 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1018 15:04:37.423105  331720 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/old-k8s-version-948537/config.json ...
	I1018 15:04:37.446561  331720 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:04:37.446584  331720 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 15:04:37.446600  331720 cache.go:232] Successfully downloaded all kic artifacts
	I1018 15:04:37.446632  331720 start.go:360] acquireMachinesLock for old-k8s-version-948537: {Name:mk09ee7802cfeacab96a479da1920c0d257c74ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:04:37.446702  331720 start.go:364] duration metric: took 45.288µs to acquireMachinesLock for "old-k8s-version-948537"
	I1018 15:04:37.446726  331720 start.go:96] Skipping create...Using existing machine configuration
	I1018 15:04:37.446736  331720 fix.go:54] fixHost starting: 
	I1018 15:04:37.447080  331720 cli_runner.go:164] Run: docker container inspect old-k8s-version-948537 --format={{.State.Status}}
	I1018 15:04:37.466958  331720 fix.go:112] recreateIfNeeded on old-k8s-version-948537: state=Stopped err=<nil>
	W1018 15:04:37.467001  331720 fix.go:138] unexpected machine state, will restart: <nil>
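
The fixHost/recreateIfNeeded decision above hinges on a single docker CLI probe: `docker container inspect --format={{.State.Status}}` reported the machine as Stopped, so minikube takes the restart path rather than recreating it. A minimal Go sketch of the same probe (exec-based, container name hard-coded for illustration; this is not minikube's actual cli_runner implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask Docker for the raw container state, then decide between
	// "start existing machine" and "create from scratch".
	out, err := exec.Command("docker", "container", "inspect",
		"old-k8s-version-948537", "--format", "{{.State.Status}}").Output()
	if err != nil {
		fmt.Println("inspect failed; container likely needs to be created:", err)
		return
	}
	state := strings.TrimSpace(string(out))
	fmt.Println("container state:", state) // "exited" corresponds to state=Stopped in the log
}
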
	I1018 15:04:33.492956  326380 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/apiserver.crt.93e76921 ...
	I1018 15:04:33.492985  326380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/apiserver.crt.93e76921: {Name:mk73f3febac4cc1bd506e5c2376d2a7d480171b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:04:33.493202  326380 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/apiserver.key.93e76921 ...
	I1018 15:04:33.493221  326380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/apiserver.key.93e76921: {Name:mke5bcacac0846a69e71a8acf10578e32ebf691f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:04:33.493310  326380 certs.go:382] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/apiserver.crt.93e76921 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/apiserver.crt
	I1018 15:04:33.493383  326380 certs.go:386] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/apiserver.key.93e76921 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/apiserver.key
	I1018 15:04:33.493436  326380 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/proxy-client.key
	I1018 15:04:33.493451  326380 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/proxy-client.crt with IP's: []
	I1018 15:04:33.803733  326380 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/proxy-client.crt ...
	I1018 15:04:33.803762  326380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/proxy-client.crt: {Name:mk989904892f8125c88e2326ea9fe9fe0d244ef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:04:33.803988  326380 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/proxy-client.key ...
	I1018 15:04:33.804010  326380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/proxy-client.key: {Name:mka5cc39b3442191bf6e23768578eceb7d865926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
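
The two "generating signed profile cert" steps above (the apiserver serving cert and the aggregator/proxy-client cert) are plain x509 issuance: generate a key, build a template, sign it with the profile's CA. A self-contained Go sketch of the aggregator-style client cert; the "aggregator" name and 26280h lifetime come from the log, but everything else is illustrative, not minikube's actual crypto.go code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Stand-in CA, playing the role of minikube's proxy-client CA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "proxyClientCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration in the config above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Client ("aggregator") cert with no IP SANs, matching `with IP's: []`.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "aggregator"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER}))
}
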
	I1018 15:04:33.804260  326380 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem (1338 bytes)
	W1018 15:04:33.804299  326380 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187_empty.pem, impossibly tiny 0 bytes
	I1018 15:04:33.804311  326380 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 15:04:33.804332  326380 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem (1082 bytes)
	I1018 15:04:33.804356  326380 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem (1123 bytes)
	I1018 15:04:33.804378  326380 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem (1675 bytes)
	I1018 15:04:33.804422  326380 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:04:33.805002  326380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 15:04:33.823004  326380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 15:04:33.839741  326380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 15:04:33.856619  326380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 15:04:33.873954  326380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 15:04:33.891154  326380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 15:04:33.909377  326380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 15:04:33.926752  326380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 15:04:33.944151  326380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /usr/share/ca-certificates/931872.pem (1708 bytes)
	I1018 15:04:33.963106  326380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 15:04:33.980976  326380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem --> /usr/share/ca-certificates/93187.pem (1338 bytes)
	I1018 15:04:33.999153  326380 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 15:04:34.011685  326380 ssh_runner.go:195] Run: openssl version
	I1018 15:04:34.017855  326380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 15:04:34.026351  326380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:04:34.030168  326380 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:04:34.030215  326380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:04:34.063442  326380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 15:04:34.073205  326380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93187.pem && ln -fs /usr/share/ca-certificates/93187.pem /etc/ssl/certs/93187.pem"
	I1018 15:04:34.082130  326380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93187.pem
	I1018 15:04:34.086103  326380 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 14:26 /usr/share/ca-certificates/93187.pem
	I1018 15:04:34.086166  326380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93187.pem
	I1018 15:04:34.120541  326380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93187.pem /etc/ssl/certs/51391683.0"
	I1018 15:04:34.129513  326380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/931872.pem && ln -fs /usr/share/ca-certificates/931872.pem /etc/ssl/certs/931872.pem"
	I1018 15:04:34.138191  326380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/931872.pem
	I1018 15:04:34.141996  326380 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 14:26 /usr/share/ca-certificates/931872.pem
	I1018 15:04:34.142112  326380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/931872.pem
	I1018 15:04:34.177295  326380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/931872.pem /etc/ssl/certs/3ec20f2e.0"
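
The `openssl x509 -hash` / `ln -fs` pairs above implement the standard /etc/ssl/certs convention: OpenSSL locates a CA by a symlink named <subject-hash>.0 pointing at the PEM file. A minimal Go sketch of one install step, assuming a local filesystem rather than minikube's SSH runner (the path is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pem string) error {
	// `openssl x509 -hash -noout` prints the subject hash used as the link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
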
	I1018 15:04:34.186478  326380 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 15:04:34.190291  326380 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 15:04:34.190358  326380 kubeadm.go:400] StartCluster: {Name:no-preload-165275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-165275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:04:34.190430  326380 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 15:04:34.190469  326380 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 15:04:34.217611  326380 cri.go:89] found id: ""
	I1018 15:04:34.217682  326380 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 15:04:34.225904  326380 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 15:04:34.233885  326380 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 15:04:34.233954  326380 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 15:04:34.242087  326380 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 15:04:34.242104  326380 kubeadm.go:157] found existing configuration files:
	
	I1018 15:04:34.242142  326380 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 15:04:34.249770  326380 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 15:04:34.249826  326380 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 15:04:34.257023  326380 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 15:04:34.264392  326380 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 15:04:34.264445  326380 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 15:04:34.271742  326380 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 15:04:34.279149  326380 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 15:04:34.279189  326380 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 15:04:34.286149  326380 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 15:04:34.293324  326380 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 15:04:34.293376  326380 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
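
The grep-then-rm sequence above is a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and otherwise removed so kubeadm can regenerate it. The same loop as a Go sketch (paths and endpoint copied from the log; the logic is illustrative):

package main

import (
	"bytes"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			os.Remove(f) // missing or pointing elsewhere: delete, like `rm -f` above
		}
	}
}
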
	I1018 15:04:34.302074  326380 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 15:04:34.370346  326380 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 15:04:34.434448  326380 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 15:04:34.421830  278049 cri.go:89] found id: "daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:04:34.421852  278049 cri.go:89] found id: ""
	I1018 15:04:34.421862  278049 logs.go:282] 1 containers: [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d]
	I1018 15:04:34.421940  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:34.425873  278049 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 15:04:34.425952  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 15:04:34.456777  278049 cri.go:89] found id: ""
	I1018 15:04:34.456806  278049 logs.go:282] 0 containers: []
	W1018 15:04:34.456817  278049 logs.go:284] No container was found matching "etcd"
	I1018 15:04:34.456824  278049 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 15:04:34.456881  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 15:04:34.484162  278049 cri.go:89] found id: ""
	I1018 15:04:34.484188  278049 logs.go:282] 0 containers: []
	W1018 15:04:34.484198  278049 logs.go:284] No container was found matching "coredns"
	I1018 15:04:34.484205  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 15:04:34.484262  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 15:04:34.511534  278049 cri.go:89] found id: "3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:04:34.511559  278049 cri.go:89] found id: ""
	I1018 15:04:34.511569  278049 logs.go:282] 1 containers: [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011]
	I1018 15:04:34.511621  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:34.515719  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 15:04:34.515811  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 15:04:34.542142  278049 cri.go:89] found id: ""
	I1018 15:04:34.542169  278049 logs.go:282] 0 containers: []
	W1018 15:04:34.542179  278049 logs.go:284] No container was found matching "kube-proxy"
	I1018 15:04:34.542185  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 15:04:34.542244  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 15:04:34.570133  278049 cri.go:89] found id: "5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:04:34.570161  278049 cri.go:89] found id: "8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14"
	I1018 15:04:34.570166  278049 cri.go:89] found id: ""
	I1018 15:04:34.570174  278049 logs.go:282] 2 containers: [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a 8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14]
	I1018 15:04:34.570235  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:34.574590  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:34.578286  278049 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 15:04:34.578338  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 15:04:34.606396  278049 cri.go:89] found id: ""
	I1018 15:04:34.606421  278049 logs.go:282] 0 containers: []
	W1018 15:04:34.606430  278049 logs.go:284] No container was found matching "kindnet"
	I1018 15:04:34.606436  278049 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 15:04:34.606488  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 15:04:34.637616  278049 cri.go:89] found id: ""
	I1018 15:04:34.637643  278049 logs.go:282] 0 containers: []
	W1018 15:04:34.637652  278049 logs.go:284] No container was found matching "storage-provisioner"
	I1018 15:04:34.637670  278049 logs.go:123] Gathering logs for dmesg ...
	I1018 15:04:34.637686  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 15:04:34.652795  278049 logs.go:123] Gathering logs for describe nodes ...
	I1018 15:04:34.652825  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 15:04:34.708037  278049 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 15:04:34.708059  278049 logs.go:123] Gathering logs for kube-apiserver [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d] ...
	I1018 15:04:34.708077  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:04:34.739309  278049 logs.go:123] Gathering logs for kube-controller-manager [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a] ...
	I1018 15:04:34.739340  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:04:34.766977  278049 logs.go:123] Gathering logs for container status ...
	I1018 15:04:34.767008  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 15:04:34.796616  278049 logs.go:123] Gathering logs for kubelet ...
	I1018 15:04:34.796645  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 15:04:34.882674  278049 logs.go:123] Gathering logs for kube-scheduler [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011] ...
	I1018 15:04:34.882708  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:04:34.934352  278049 logs.go:123] Gathering logs for kube-controller-manager [8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14] ...
	I1018 15:04:34.934387  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14"
	I1018 15:04:34.960406  278049 logs.go:123] Gathering logs for CRI-O ...
	I1018 15:04:34.960436  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 15:04:37.509012  278049 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 15:04:37.509513  278049 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1018 15:04:37.509590  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 15:04:37.509651  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 15:04:37.539164  278049 cri.go:89] found id: "daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:04:37.539190  278049 cri.go:89] found id: ""
	I1018 15:04:37.539200  278049 logs.go:282] 1 containers: [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d]
	I1018 15:04:37.539261  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:37.543463  278049 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 15:04:37.543629  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 15:04:37.574549  278049 cri.go:89] found id: ""
	I1018 15:04:37.574579  278049 logs.go:282] 0 containers: []
	W1018 15:04:37.574590  278049 logs.go:284] No container was found matching "etcd"
	I1018 15:04:37.574597  278049 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 15:04:37.574659  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 15:04:37.614534  278049 cri.go:89] found id: ""
	I1018 15:04:37.614563  278049 logs.go:282] 0 containers: []
	W1018 15:04:37.614574  278049 logs.go:284] No container was found matching "coredns"
	I1018 15:04:37.614582  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 15:04:37.614644  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 15:04:37.653041  278049 cri.go:89] found id: "3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:04:37.653063  278049 cri.go:89] found id: ""
	I1018 15:04:37.653074  278049 logs.go:282] 1 containers: [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011]
	I1018 15:04:37.653134  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:37.657621  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 15:04:37.657698  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 15:04:37.688665  278049 cri.go:89] found id: ""
	I1018 15:04:37.688696  278049 logs.go:282] 0 containers: []
	W1018 15:04:37.688708  278049 logs.go:284] No container was found matching "kube-proxy"
	I1018 15:04:37.688716  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 15:04:37.688775  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 15:04:37.719680  278049 cri.go:89] found id: "5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:04:37.719704  278049 cri.go:89] found id: "8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14"
	I1018 15:04:37.719709  278049 cri.go:89] found id: ""
	I1018 15:04:37.719719  278049 logs.go:282] 2 containers: [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a 8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14]
	I1018 15:04:37.719778  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:37.724757  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:37.728673  278049 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 15:04:37.728725  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 15:04:37.761562  278049 cri.go:89] found id: ""
	I1018 15:04:37.761586  278049 logs.go:282] 0 containers: []
	W1018 15:04:37.761594  278049 logs.go:284] No container was found matching "kindnet"
	I1018 15:04:37.761603  278049 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 15:04:37.761652  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 15:04:37.796416  278049 cri.go:89] found id: ""
	I1018 15:04:37.796441  278049 logs.go:282] 0 containers: []
	W1018 15:04:37.796451  278049 logs.go:284] No container was found matching "storage-provisioner"
	I1018 15:04:37.796468  278049 logs.go:123] Gathering logs for kube-controller-manager [8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14] ...
	I1018 15:04:37.796485  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14"
	I1018 15:04:37.830002  278049 logs.go:123] Gathering logs for CRI-O ...
	I1018 15:04:37.830036  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 15:04:37.885361  278049 logs.go:123] Gathering logs for kubelet ...
	I1018 15:04:37.885465  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 15:04:37.992695  278049 logs.go:123] Gathering logs for describe nodes ...
	I1018 15:04:37.992735  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 15:04:38.067255  278049 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 15:04:38.067286  278049 logs.go:123] Gathering logs for container status ...
	I1018 15:04:38.067303  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 15:04:38.107361  278049 logs.go:123] Gathering logs for dmesg ...
	I1018 15:04:38.107392  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 15:04:38.123593  278049 logs.go:123] Gathering logs for kube-apiserver [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d] ...
	I1018 15:04:38.123634  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:04:38.156750  278049 logs.go:123] Gathering logs for kube-scheduler [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011] ...
	I1018 15:04:38.156780  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:04:38.207034  278049 logs.go:123] Gathering logs for kube-controller-manager [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a] ...
	I1018 15:04:38.207067  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
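
Process 278049 is cycling: probe the apiserver's /healthz, and while it refuses connections, gather kubelet, CRI-O, and per-container logs for diagnostics. A minimal version of the health poll, assuming self-signed certs (hence skipping TLS verification) and the 192.168.76.2:8443 endpoint from the log; timeouts are illustrative:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver healthy")
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(2 * time.Second) // connection refused until the static pod comes up
	}
	fmt.Println("timed out waiting for apiserver")
}
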
	I1018 15:04:37.468902  331720 out.go:252] * Restarting existing docker container for "old-k8s-version-948537" ...
	I1018 15:04:37.469003  331720 cli_runner.go:164] Run: docker start old-k8s-version-948537
	I1018 15:04:37.752082  331720 cli_runner.go:164] Run: docker container inspect old-k8s-version-948537 --format={{.State.Status}}
	I1018 15:04:37.773173  331720 kic.go:430] container "old-k8s-version-948537" state is running.
	I1018 15:04:37.773648  331720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-948537
	I1018 15:04:37.794683  331720 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/old-k8s-version-948537/config.json ...
	I1018 15:04:37.795003  331720 machine.go:93] provisionDockerMachine start ...
	I1018 15:04:37.795100  331720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-948537
	I1018 15:04:37.816212  331720 main.go:141] libmachine: Using SSH client type: native
	I1018 15:04:37.816563  331720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1018 15:04:37.816586  331720 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:04:37.817230  331720 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42776->127.0.0.1:33063: read: connection reset by peer
	I1018 15:04:40.961456  331720 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-948537
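
The `Error dialing TCP ... connection reset by peer` line above is expected right after `docker start`: sshd inside the container is not yet accepting connections, and the provisioner simply retries until the hostname command succeeds a few seconds later. A bare-bones retry dial in Go (address from the log; attempt count and backoff are illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c net.Conn
		c, err = net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			return c, nil
		}
		time.Sleep(time.Second) // give sshd time to come up
	}
	return nil, fmt.Errorf("ssh not reachable after %d attempts: %w", attempts, err)
}

func main() {
	c, err := dialWithRetry("127.0.0.1:33063", 30)
	if err != nil {
		fmt.Println(err)
		return
	}
	c.Close()
	fmt.Println("port is accepting connections")
}
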
	
	I1018 15:04:40.961485  331720 ubuntu.go:182] provisioning hostname "old-k8s-version-948537"
	I1018 15:04:40.961551  331720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-948537
	I1018 15:04:40.980527  331720 main.go:141] libmachine: Using SSH client type: native
	I1018 15:04:40.980848  331720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1018 15:04:40.980878  331720 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-948537 && echo "old-k8s-version-948537" | sudo tee /etc/hostname
	I1018 15:04:41.128213  331720 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-948537
	
	I1018 15:04:41.128311  331720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-948537
	I1018 15:04:41.148242  331720 main.go:141] libmachine: Using SSH client type: native
	I1018 15:04:41.148563  331720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1018 15:04:41.148590  331720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-948537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-948537/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-948537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:04:41.287329  331720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 15:04:41.287368  331720 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 15:04:41.287400  331720 ubuntu.go:190] setting up certificates
	I1018 15:04:41.287414  331720 provision.go:84] configureAuth start
	I1018 15:04:41.287484  331720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-948537
	I1018 15:04:41.306760  331720 provision.go:143] copyHostCerts
	I1018 15:04:41.306852  331720 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem, removing ...
	I1018 15:04:41.306874  331720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:04:41.306996  331720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 15:04:41.307132  331720 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem, removing ...
	I1018 15:04:41.307145  331720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:04:41.307188  331720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 15:04:41.307272  331720 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem, removing ...
	I1018 15:04:41.307283  331720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:04:41.307319  331720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 15:04:41.307391  331720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-948537 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-948537]
	I1018 15:04:41.387756  331720 provision.go:177] copyRemoteCerts
	I1018 15:04:41.387812  331720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:04:41.387863  331720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-948537
	I1018 15:04:41.405556  331720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/old-k8s-version-948537/id_rsa Username:docker}
	I1018 15:04:41.503288  331720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:04:41.521622  331720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 15:04:41.539096  331720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 15:04:41.557241  331720 provision.go:87] duration metric: took 269.808291ms to configureAuth
	I1018 15:04:41.557276  331720 ubuntu.go:206] setting minikube options for container-runtime
	I1018 15:04:41.557454  331720 config.go:182] Loaded profile config "old-k8s-version-948537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 15:04:41.557542  331720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-948537
	I1018 15:04:41.574965  331720 main.go:141] libmachine: Using SSH client type: native
	I1018 15:04:41.575218  331720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1018 15:04:41.575239  331720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:04:41.889123  331720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 15:04:41.889189  331720 machine.go:96] duration metric: took 4.094145794s to provisionDockerMachine
	I1018 15:04:41.889207  331720 start.go:293] postStartSetup for "old-k8s-version-948537" (driver="docker")
	I1018 15:04:41.889221  331720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:04:41.889441  331720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:04:41.889549  331720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-948537
	I1018 15:04:41.915487  331720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/old-k8s-version-948537/id_rsa Username:docker}
	I1018 15:04:42.019471  331720 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:04:42.023814  331720 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 15:04:42.023862  331720 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 15:04:42.023878  331720 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/addons for local assets ...
	I1018 15:04:42.023965  331720 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/files for local assets ...
	I1018 15:04:42.024047  331720 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem -> 931872.pem in /etc/ssl/certs
	I1018 15:04:42.024142  331720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:04:42.033638  331720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:04:42.054810  331720 start.go:296] duration metric: took 165.58444ms for postStartSetup
	I1018 15:04:42.054908  331720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 15:04:42.054993  331720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-948537
	I1018 15:04:42.076289  331720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/old-k8s-version-948537/id_rsa Username:docker}
	I1018 15:04:42.175255  331720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 15:04:42.180338  331720 fix.go:56] duration metric: took 4.733594509s for fixHost
	I1018 15:04:42.180366  331720 start.go:83] releasing machines lock for "old-k8s-version-948537", held for 4.733651142s
	I1018 15:04:42.180435  331720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-948537
	I1018 15:04:42.198118  331720 ssh_runner.go:195] Run: cat /version.json
	I1018 15:04:42.198141  331720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:04:42.198191  331720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-948537
	I1018 15:04:42.198208  331720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-948537
	I1018 15:04:42.219089  331720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/old-k8s-version-948537/id_rsa Username:docker}
	I1018 15:04:42.219653  331720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/old-k8s-version-948537/id_rsa Username:docker}
	I1018 15:04:42.369676  331720 ssh_runner.go:195] Run: systemctl --version
	I1018 15:04:42.376477  331720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:04:42.411603  331720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:04:42.416608  331720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:04:42.416666  331720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:04:42.426024  331720 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 15:04:42.426055  331720 start.go:495] detecting cgroup driver to use...
	I1018 15:04:42.426094  331720 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 15:04:42.426139  331720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:04:42.442749  331720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:04:42.455217  331720 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:04:42.455274  331720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:04:42.470072  331720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:04:42.483247  331720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:04:42.567349  331720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:04:42.654131  331720 docker.go:234] disabling docker service ...
	I1018 15:04:42.654241  331720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:04:42.675202  331720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:04:42.691955  331720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:04:42.791060  331720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:04:42.889319  331720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 15:04:42.909041  331720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:04:42.927431  331720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1018 15:04:42.927487  331720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:04:42.937393  331720 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 15:04:42.937456  331720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:04:42.946470  331720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:04:42.955821  331720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:04:42.965022  331720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:04:42.974270  331720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:04:42.983545  331720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:04:42.992165  331720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:04:43.001434  331720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:04:43.009189  331720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 15:04:43.018643  331720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:04:43.106624  331720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 15:04:43.217814  331720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:04:43.217888  331720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:04:43.222110  331720 start.go:563] Will wait 60s for crictl version
	I1018 15:04:43.222171  331720 ssh_runner.go:195] Run: which crictl
	I1018 15:04:43.226697  331720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 15:04:43.250801  331720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 15:04:43.250883  331720 ssh_runner.go:195] Run: crio --version
	I1018 15:04:43.278962  331720 ssh_runner.go:195] Run: crio --version
	I1018 15:04:43.309621  331720 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
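
A note for readers tracing the runtime setup above: the sed invocations rewrite exactly two keys in /etc/crio/crio.conf.d/02-crio.conf, the pause image and the cgroup manager, before crio is restarted. A minimal local Go sketch of the same rewrite (path and values copied from the log lines above; error handling simplified, so this is illustrative rather than the actual minikube code, which runs sed over SSH):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

The conmon_cgroup and default_sysctls edits that follow in the log apply the same sed-over-config-file pattern.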
	I1018 15:04:43.629471  326380 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 15:04:43.629548  326380 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 15:04:43.629660  326380 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 15:04:43.629745  326380 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 15:04:43.629788  326380 kubeadm.go:318] OS: Linux
	I1018 15:04:43.629853  326380 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 15:04:43.629941  326380 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 15:04:43.630003  326380 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 15:04:43.630073  326380 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 15:04:43.630133  326380 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 15:04:43.630227  326380 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 15:04:43.630313  326380 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 15:04:43.630375  326380 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 15:04:43.630487  326380 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 15:04:43.630610  326380 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 15:04:43.630786  326380 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 15:04:43.630882  326380 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 15:04:43.632756  326380 out.go:252]   - Generating certificates and keys ...
	I1018 15:04:43.632861  326380 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 15:04:43.633019  326380 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 15:04:43.633200  326380 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 15:04:43.633303  326380 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 15:04:43.633392  326380 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 15:04:43.633466  326380 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 15:04:43.633542  326380 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 15:04:43.633695  326380 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-165275] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 15:04:43.633783  326380 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 15:04:43.633967  326380 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-165275] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 15:04:43.634052  326380 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 15:04:43.634135  326380 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 15:04:43.634212  326380 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 15:04:43.634298  326380 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 15:04:43.634374  326380 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 15:04:43.634454  326380 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 15:04:43.634532  326380 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 15:04:43.634642  326380 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 15:04:43.634731  326380 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 15:04:43.634807  326380 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 15:04:43.634869  326380 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 15:04:43.636556  326380 out.go:252]   - Booting up control plane ...
	I1018 15:04:43.636651  326380 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 15:04:43.636740  326380 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 15:04:43.636840  326380 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 15:04:43.637037  326380 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 15:04:43.637130  326380 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 15:04:43.637219  326380 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 15:04:43.637334  326380 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 15:04:43.637406  326380 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 15:04:43.637601  326380 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 15:04:43.637725  326380 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 15:04:43.637812  326380 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000897152s
	I1018 15:04:43.637951  326380 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 15:04:43.638083  326380 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 15:04:43.638200  326380 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 15:04:43.638319  326380 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 15:04:43.638431  326380 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.895734913s
	I1018 15:04:43.638525  326380 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.936778034s
	I1018 15:04:43.638620  326380 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.502105163s
	I1018 15:04:43.638760  326380 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 15:04:43.638899  326380 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 15:04:43.638985  326380 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 15:04:43.639248  326380 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-165275 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 15:04:43.639323  326380 kubeadm.go:318] [bootstrap-token] Using token: kvs1ng.l439vd0n23ieiiqs
	I1018 15:04:43.641611  326380 out.go:252]   - Configuring RBAC rules ...
	I1018 15:04:43.641727  326380 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 15:04:43.641820  326380 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 15:04:43.641999  326380 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 15:04:43.642114  326380 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 15:04:43.642213  326380 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 15:04:43.642281  326380 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 15:04:43.642370  326380 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 15:04:43.642406  326380 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 15:04:43.642443  326380 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 15:04:43.642449  326380 kubeadm.go:318] 
	I1018 15:04:43.642495  326380 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 15:04:43.642500  326380 kubeadm.go:318] 
	I1018 15:04:43.642563  326380 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 15:04:43.642570  326380 kubeadm.go:318] 
	I1018 15:04:43.642590  326380 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 15:04:43.642635  326380 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 15:04:43.642676  326380 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 15:04:43.642682  326380 kubeadm.go:318] 
	I1018 15:04:43.642766  326380 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 15:04:43.642783  326380 kubeadm.go:318] 
	I1018 15:04:43.642874  326380 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 15:04:43.642886  326380 kubeadm.go:318] 
	I1018 15:04:43.642976  326380 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 15:04:43.643044  326380 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 15:04:43.643111  326380 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 15:04:43.643117  326380 kubeadm.go:318] 
	I1018 15:04:43.643184  326380 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 15:04:43.643245  326380 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 15:04:43.643250  326380 kubeadm.go:318] 
	I1018 15:04:43.643314  326380 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token kvs1ng.l439vd0n23ieiiqs \
	I1018 15:04:43.643395  326380 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9845daa2461a34dd973607f8353b114051f1b03075ff9500a74fb21865e04329 \
	I1018 15:04:43.643417  326380 kubeadm.go:318] 	--control-plane 
	I1018 15:04:43.643423  326380 kubeadm.go:318] 
	I1018 15:04:43.643493  326380 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 15:04:43.643498  326380 kubeadm.go:318] 
	I1018 15:04:43.643564  326380 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token kvs1ng.l439vd0n23ieiiqs \
	I1018 15:04:43.643661  326380 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9845daa2461a34dd973607f8353b114051f1b03075ff9500a74fb21865e04329 
	I1018 15:04:43.643671  326380 cni.go:84] Creating CNI manager for ""
	I1018 15:04:43.643678  326380 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:04:43.645737  326380 out.go:179] * Configuring CNI (Container Networking Interface) ...
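
A note on the kubeadm join commands printed above: the --discovery-token-ca-cert-hash value is sha256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A small Go sketch that recomputes it from the CA certificate on disk (path taken from the certs steps elsewhere in this log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block in ca.crt")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// kubeadm's CA cert hash: sha256 over the raw SubjectPublicKeyInfo DER.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}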
	I1018 15:04:40.734617  278049 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 15:04:40.735123  278049 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1018 15:04:40.735189  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 15:04:40.735255  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 15:04:40.770226  278049 cri.go:89] found id: "daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:04:40.770253  278049 cri.go:89] found id: ""
	I1018 15:04:40.770263  278049 logs.go:282] 1 containers: [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d]
	I1018 15:04:40.770327  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:40.775431  278049 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 15:04:40.775505  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 15:04:40.806504  278049 cri.go:89] found id: ""
	I1018 15:04:40.806535  278049 logs.go:282] 0 containers: []
	W1018 15:04:40.806546  278049 logs.go:284] No container was found matching "etcd"
	I1018 15:04:40.806553  278049 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 15:04:40.806615  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 15:04:40.839543  278049 cri.go:89] found id: ""
	I1018 15:04:40.839572  278049 logs.go:282] 0 containers: []
	W1018 15:04:40.839582  278049 logs.go:284] No container was found matching "coredns"
	I1018 15:04:40.839589  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 15:04:40.839653  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 15:04:40.871552  278049 cri.go:89] found id: "3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:04:40.871576  278049 cri.go:89] found id: ""
	I1018 15:04:40.871587  278049 logs.go:282] 1 containers: [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011]
	I1018 15:04:40.871652  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:40.876310  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 15:04:40.876378  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 15:04:40.903297  278049 cri.go:89] found id: ""
	I1018 15:04:40.903326  278049 logs.go:282] 0 containers: []
	W1018 15:04:40.903342  278049 logs.go:284] No container was found matching "kube-proxy"
	I1018 15:04:40.903352  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 15:04:40.903413  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 15:04:40.931696  278049 cri.go:89] found id: "5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:04:40.931723  278049 cri.go:89] found id: "8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14"
	I1018 15:04:40.931730  278049 cri.go:89] found id: ""
	I1018 15:04:40.931740  278049 logs.go:282] 2 containers: [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a 8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14]
	I1018 15:04:40.931802  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:40.935975  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:40.940300  278049 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 15:04:40.940361  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 15:04:40.968599  278049 cri.go:89] found id: ""
	I1018 15:04:40.968630  278049 logs.go:282] 0 containers: []
	W1018 15:04:40.968640  278049 logs.go:284] No container was found matching "kindnet"
	I1018 15:04:40.968648  278049 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 15:04:40.968712  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 15:04:40.997484  278049 cri.go:89] found id: ""
	I1018 15:04:40.997510  278049 logs.go:282] 0 containers: []
	W1018 15:04:40.997521  278049 logs.go:284] No container was found matching "storage-provisioner"
	I1018 15:04:40.997539  278049 logs.go:123] Gathering logs for kube-scheduler [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011] ...
	I1018 15:04:40.997554  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:04:41.050120  278049 logs.go:123] Gathering logs for kube-controller-manager [8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14] ...
	I1018 15:04:41.050151  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14"
	I1018 15:04:41.076870  278049 logs.go:123] Gathering logs for CRI-O ...
	I1018 15:04:41.076899  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 15:04:41.124052  278049 logs.go:123] Gathering logs for container status ...
	I1018 15:04:41.124094  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 15:04:41.157194  278049 logs.go:123] Gathering logs for describe nodes ...
	I1018 15:04:41.157224  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 15:04:41.214042  278049 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 15:04:41.214066  278049 logs.go:123] Gathering logs for kube-apiserver [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d] ...
	I1018 15:04:41.214083  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:04:41.247202  278049 logs.go:123] Gathering logs for kube-controller-manager [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a] ...
	I1018 15:04:41.247240  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:04:41.273741  278049 logs.go:123] Gathering logs for kubelet ...
	I1018 15:04:41.273768  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 15:04:41.378965  278049 logs.go:123] Gathering logs for dmesg ...
	I1018 15:04:41.379009  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
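
The log-gathering pass above follows one pattern per component: list candidate container IDs with crictl ps -a --quiet --name=<component>, then tail each container's last 400 log lines. A sketch of that loop in Go (assuming crictl on PATH and sudo rights, as in the logged commands):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func gatherLogs(component string) error {
	// List all containers (including exited) whose name matches the component.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--name="+component).Output()
	if err != nil {
		return err
	}
	for _, id := range strings.Fields(string(out)) {
		// Tail the last 400 lines, mirroring the crictl logs calls above.
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("== %s %s ==\n%s", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		if err := gatherLogs(c); err != nil {
			fmt.Println(c, "error:", err)
		}
	}
}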
	I1018 15:04:43.895985  278049 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 15:04:43.896427  278049 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1018 15:04:43.896489  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 15:04:43.896551  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 15:04:43.937097  278049 cri.go:89] found id: "daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:04:43.937127  278049 cri.go:89] found id: ""
	I1018 15:04:43.937139  278049 logs.go:282] 1 containers: [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d]
	I1018 15:04:43.937206  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:43.942825  278049 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 15:04:43.942899  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 15:04:43.980085  278049 cri.go:89] found id: ""
	I1018 15:04:43.980141  278049 logs.go:282] 0 containers: []
	W1018 15:04:43.980151  278049 logs.go:284] No container was found matching "etcd"
	I1018 15:04:43.980158  278049 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 15:04:43.980219  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 15:04:44.015274  278049 cri.go:89] found id: ""
	I1018 15:04:44.015310  278049 logs.go:282] 0 containers: []
	W1018 15:04:44.015324  278049 logs.go:284] No container was found matching "coredns"
	I1018 15:04:44.015332  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 15:04:44.015396  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 15:04:44.051619  278049 cri.go:89] found id: "3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:04:44.051643  278049 cri.go:89] found id: ""
	I1018 15:04:44.051656  278049 logs.go:282] 1 containers: [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011]
	I1018 15:04:44.051715  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:44.055692  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 15:04:44.055764  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 15:04:44.089596  278049 cri.go:89] found id: ""
	I1018 15:04:44.089626  278049 logs.go:282] 0 containers: []
	W1018 15:04:44.089638  278049 logs.go:284] No container was found matching "kube-proxy"
	I1018 15:04:44.089645  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 15:04:44.089702  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 15:04:44.118656  278049 cri.go:89] found id: "5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:04:44.118680  278049 cri.go:89] found id: "8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14"
	I1018 15:04:44.118685  278049 cri.go:89] found id: ""
	I1018 15:04:44.118695  278049 logs.go:282] 2 containers: [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a 8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14]
	I1018 15:04:44.118757  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:44.123318  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:44.127194  278049 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 15:04:44.127268  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 15:04:44.158152  278049 cri.go:89] found id: ""
	I1018 15:04:44.158188  278049 logs.go:282] 0 containers: []
	W1018 15:04:44.158199  278049 logs.go:284] No container was found matching "kindnet"
	I1018 15:04:44.158208  278049 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 15:04:44.158267  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 15:04:44.192940  278049 cri.go:89] found id: ""
	I1018 15:04:44.193024  278049 logs.go:282] 0 containers: []
	W1018 15:04:44.193041  278049 logs.go:284] No container was found matching "storage-provisioner"
	I1018 15:04:44.193061  278049 logs.go:123] Gathering logs for kubelet ...
	I1018 15:04:44.193082  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 15:04:44.316461  278049 logs.go:123] Gathering logs for dmesg ...
	I1018 15:04:44.316508  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 15:04:44.342434  278049 logs.go:123] Gathering logs for describe nodes ...
	I1018 15:04:44.342466  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1018 15:04:43.310862  331720 cli_runner.go:164] Run: docker network inspect old-k8s-version-948537 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:04:43.328986  331720 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 15:04:43.333144  331720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
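
The bash one-liner above updates /etc/hosts idempotently: strip any line ending in <tab>host.minikube.internal, append the fresh mapping, and copy the result back into place. An equivalent Go sketch (illustrative only; the real code shells out exactly as logged, and this version drops blank lines for brevity):

package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Mirrors: grep -v $'\t<name>$' -- drop the stale entry, keep the rest.
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.103.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}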
	I1018 15:04:43.344857  331720 kubeadm.go:883] updating cluster {Name:old-k8s-version-948537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-948537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 15:04:43.344996  331720 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 15:04:43.345058  331720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:04:43.377907  331720 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:04:43.377946  331720 crio.go:433] Images already preloaded, skipping extraction
	I1018 15:04:43.378003  331720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:04:43.404218  331720 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:04:43.404246  331720 cache_images.go:85] Images are preloaded, skipping loading
	I1018 15:04:43.404256  331720 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.28.0 crio true true} ...
	I1018 15:04:43.404379  331720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-948537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-948537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 15:04:43.404461  331720 ssh_runner.go:195] Run: crio config
	I1018 15:04:43.454358  331720 cni.go:84] Creating CNI manager for ""
	I1018 15:04:43.454380  331720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:04:43.454394  331720 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 15:04:43.454417  331720 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-948537 NodeName:old-k8s-version-948537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 15:04:43.454565  331720 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-948537"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 15:04:43.454626  331720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1018 15:04:43.462822  331720 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 15:04:43.462895  331720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 15:04:43.470993  331720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1018 15:04:43.483670  331720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 15:04:43.496279  331720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
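
The kubeadm config printed above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that the scp step on this line stages as /var/tmp/minikube/kubeadm.yaml.new before it is diffed against the active copy. Purely as an illustration, a stdlib-only Go sketch that splits such a stream and reports each document's kind:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Split on YAML document separators and pick out the "kind:" line of each.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Printf("document %d: %s\n", i, strings.TrimSpace(line))
			}
		}
	}
}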
	I1018 15:04:43.509137  331720 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 15:04:43.513268  331720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:04:43.524776  331720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:04:43.607102  331720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:04:43.630906  331720 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/old-k8s-version-948537 for IP: 192.168.103.2
	I1018 15:04:43.630958  331720 certs.go:195] generating shared ca certs ...
	I1018 15:04:43.630980  331720 certs.go:227] acquiring lock for ca certs: {Name:mk6d3b4e44856f498157352ffe8bb89d2c7a3998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:04:43.631143  331720 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key
	I1018 15:04:43.631182  331720 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key
	I1018 15:04:43.631191  331720 certs.go:257] generating profile certs ...
	I1018 15:04:43.631276  331720 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/old-k8s-version-948537/client.key
	I1018 15:04:43.631358  331720 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/old-k8s-version-948537/apiserver.key.44e86cc4
	I1018 15:04:43.631395  331720 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/old-k8s-version-948537/proxy-client.key
	I1018 15:04:43.631496  331720 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem (1338 bytes)
	W1018 15:04:43.631526  331720 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187_empty.pem, impossibly tiny 0 bytes
	I1018 15:04:43.631542  331720 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 15:04:43.631577  331720 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem (1082 bytes)
	I1018 15:04:43.631603  331720 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem (1123 bytes)
	I1018 15:04:43.631627  331720 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem (1675 bytes)
	I1018 15:04:43.631683  331720 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:04:43.632453  331720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 15:04:43.653165  331720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 15:04:43.674425  331720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 15:04:43.695288  331720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 15:04:43.717258  331720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/old-k8s-version-948537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 15:04:43.743120  331720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/old-k8s-version-948537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 15:04:43.763822  331720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/old-k8s-version-948537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 15:04:43.785521  331720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/old-k8s-version-948537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 15:04:43.805096  331720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 15:04:43.823346  331720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem --> /usr/share/ca-certificates/93187.pem (1338 bytes)
	I1018 15:04:43.843537  331720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /usr/share/ca-certificates/931872.pem (1708 bytes)
	I1018 15:04:43.863284  331720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 15:04:43.878953  331720 ssh_runner.go:195] Run: openssl version
	I1018 15:04:43.886513  331720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93187.pem && ln -fs /usr/share/ca-certificates/93187.pem /etc/ssl/certs/93187.pem"
	I1018 15:04:43.897809  331720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93187.pem
	I1018 15:04:43.902382  331720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 14:26 /usr/share/ca-certificates/93187.pem
	I1018 15:04:43.902434  331720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93187.pem
	I1018 15:04:43.953541  331720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93187.pem /etc/ssl/certs/51391683.0"
	I1018 15:04:43.964759  331720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/931872.pem && ln -fs /usr/share/ca-certificates/931872.pem /etc/ssl/certs/931872.pem"
	I1018 15:04:43.977087  331720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/931872.pem
	I1018 15:04:43.983310  331720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 14:26 /usr/share/ca-certificates/931872.pem
	I1018 15:04:43.983376  331720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/931872.pem
	I1018 15:04:44.039388  331720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/931872.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 15:04:44.049211  331720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 15:04:44.059208  331720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:04:44.064371  331720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:04:44.064426  331720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:04:44.105061  331720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 15:04:44.114625  331720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 15:04:44.119028  331720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 15:04:44.168887  331720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 15:04:44.219578  331720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 15:04:44.276641  331720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 15:04:44.336122  331720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 15:04:44.396095  331720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
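
Each openssl x509 -checkend 86400 run above asks whether the certificate expires within the next 24 hours (86400 seconds). The same check in Go, sketched over two of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend fails if the certificate at path expires within the given window,
// approximating openssl x509 -checkend.
func checkend(path string, within time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(within).After(cert.NotAfter) {
		return fmt.Errorf("%s expires at %s", path, cert.NotAfter)
	}
	return nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		if err := checkend(p, 24*time.Hour); err != nil {
			fmt.Println(err)
		}
	}
}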
	I1018 15:04:44.442746  331720 kubeadm.go:400] StartCluster: {Name:old-k8s-version-948537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-948537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:04:44.442877  331720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 15:04:44.442959  331720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 15:04:44.482258  331720 cri.go:89] found id: "66072254c9bf69ad4fa0d45670ab4ee9fbc8ac23b9081209ca73e1a08513bb77"
	I1018 15:04:44.482284  331720 cri.go:89] found id: "44dad120630eb2d0733b71694fa13433f00c53f74453d3fb34d10d2c5e2c1174"
	I1018 15:04:44.482290  331720 cri.go:89] found id: "c6c9f1798915d53f9ebc8eea360ea84ac0d228a2a817fa4a501701022703284a"
	I1018 15:04:44.482295  331720 cri.go:89] found id: "851f6b38dcd85d53e129d77afb0ca322c1c82f4dcc331a5606dc1cbaa443e3f6"
	I1018 15:04:44.482299  331720 cri.go:89] found id: ""
	I1018 15:04:44.482347  331720 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 15:04:44.499114  331720 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:04:44Z" level=error msg="open /run/runc: no such file or directory"
	I1018 15:04:44.499213  331720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 15:04:44.511350  331720 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 15:04:44.511371  331720 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 15:04:44.511421  331720 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 15:04:44.521197  331720 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 15:04:44.522167  331720 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-948537" does not appear in /home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:04:44.522819  331720 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-89690/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-948537" cluster setting kubeconfig missing "old-k8s-version-948537" context setting]
	I1018 15:04:44.524516  331720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/kubeconfig: {Name:mkdc6fbc9be8e57447490b30dd5bb0086611876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:04:44.526634  331720 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 15:04:44.538176  331720 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1018 15:04:44.538218  331720 kubeadm.go:601] duration metric: took 26.840118ms to restartPrimaryControlPlane
	I1018 15:04:44.538230  331720 kubeadm.go:402] duration metric: took 95.494704ms to StartCluster
	I1018 15:04:44.538250  331720 settings.go:142] acquiring lock: {Name:mk9d9d72167811b3554435e137ed17137bf54a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:04:44.538455  331720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:04:44.539879  331720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/kubeconfig: {Name:mkdc6fbc9be8e57447490b30dd5bb0086611876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:04:44.540165  331720 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:04:44.540238  331720 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 15:04:44.540365  331720 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-948537"
	I1018 15:04:44.540386  331720 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-948537"
	W1018 15:04:44.540395  331720 addons.go:247] addon storage-provisioner should already be in state true
	I1018 15:04:44.540398  331720 addons.go:69] Setting dashboard=true in profile "old-k8s-version-948537"
	I1018 15:04:44.540427  331720 host.go:66] Checking if "old-k8s-version-948537" exists ...
	I1018 15:04:44.540420  331720 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-948537"
	I1018 15:04:44.540437  331720 addons.go:238] Setting addon dashboard=true in "old-k8s-version-948537"
	W1018 15:04:44.540450  331720 addons.go:247] addon dashboard should already be in state true
	I1018 15:04:44.540491  331720 host.go:66] Checking if "old-k8s-version-948537" exists ...
	I1018 15:04:44.540450  331720 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-948537"
	I1018 15:04:44.540843  331720 cli_runner.go:164] Run: docker container inspect old-k8s-version-948537 --format={{.State.Status}}
	I1018 15:04:44.541050  331720 cli_runner.go:164] Run: docker container inspect old-k8s-version-948537 --format={{.State.Status}}
	I1018 15:04:44.541140  331720 config.go:182] Loaded profile config "old-k8s-version-948537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 15:04:44.541443  331720 cli_runner.go:164] Run: docker container inspect old-k8s-version-948537 --format={{.State.Status}}
	I1018 15:04:44.544256  331720 out.go:179] * Verifying Kubernetes components...
	I1018 15:04:44.545494  331720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:04:44.577889  331720 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-948537"
	W1018 15:04:44.577932  331720 addons.go:247] addon default-storageclass should already be in state true
	I1018 15:04:44.577968  331720 host.go:66] Checking if "old-k8s-version-948537" exists ...
	I1018 15:04:44.578291  331720 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 15:04:44.578473  331720 cli_runner.go:164] Run: docker container inspect old-k8s-version-948537 --format={{.State.Status}}
	I1018 15:04:44.579547  331720 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 15:04:44.579567  331720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 15:04:44.579624  331720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-948537
	I1018 15:04:44.589275  331720 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 15:04:44.590602  331720 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 15:04:44.594848  331720 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 15:04:44.594887  331720 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 15:04:44.594972  331720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-948537
	I1018 15:04:44.616509  331720 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 15:04:44.616608  331720 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 15:04:44.616719  331720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-948537
	I1018 15:04:44.626775  331720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/old-k8s-version-948537/id_rsa Username:docker}
	I1018 15:04:44.634273  331720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/old-k8s-version-948537/id_rsa Username:docker}
	I1018 15:04:44.656831  331720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/old-k8s-version-948537/id_rsa Username:docker}
	I1018 15:04:44.744755  331720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:04:44.755754  331720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 15:04:44.765338  331720 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-948537" to be "Ready" ...
	I1018 15:04:44.766247  331720 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 15:04:44.766275  331720 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 15:04:44.778944  331720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 15:04:44.783990  331720 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 15:04:44.784024  331720 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 15:04:44.799641  331720 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 15:04:44.799674  331720 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 15:04:44.819308  331720 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 15:04:44.819333  331720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 15:04:44.838814  331720 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 15:04:44.838843  331720 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 15:04:44.856055  331720 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 15:04:44.856103  331720 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 15:04:44.872937  331720 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 15:04:44.872968  331720 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 15:04:44.887058  331720 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 15:04:44.887084  331720 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 15:04:44.900430  331720 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 15:04:44.900468  331720 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 15:04:44.914679  331720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 15:04:46.787679  331720 node_ready.go:49] node "old-k8s-version-948537" is "Ready"
	I1018 15:04:46.787723  331720 node_ready.go:38] duration metric: took 2.02234783s for node "old-k8s-version-948537" to be "Ready" ...
	I1018 15:04:46.787744  331720 api_server.go:52] waiting for apiserver process to appear ...
	I1018 15:04:46.787818  331720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:04:47.447739  331720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.691940229s)
	I1018 15:04:47.447820  331720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.668838295s)
	I1018 15:04:47.855437  331720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.94071232s)
	I1018 15:04:47.855502  331720 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.067651557s)
	I1018 15:04:47.855535  331720 api_server.go:72] duration metric: took 3.315335858s to wait for apiserver process to appear ...
	I1018 15:04:47.855577  331720 api_server.go:88] waiting for apiserver healthz status ...
	I1018 15:04:47.855601  331720 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 15:04:47.856947  331720 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-948537 addons enable metrics-server
	
	I1018 15:04:47.858321  331720 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1018 15:04:43.646881  326380 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 15:04:43.652349  326380 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 15:04:43.652372  326380 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 15:04:43.667228  326380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 15:04:43.900344  326380 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 15:04:43.900446  326380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-165275 minikube.k8s.io/updated_at=2025_10_18T15_04_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=no-preload-165275 minikube.k8s.io/primary=true
	I1018 15:04:43.900451  326380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:04:44.007232  326380 ops.go:34] apiserver oom_adj: -16
	I1018 15:04:44.007485  326380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:04:44.507956  326380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:04:45.007606  326380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:04:45.508167  326380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:04:46.008163  326380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:04:46.510061  326380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:04:47.007603  326380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:04:47.508155  326380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:04:48.007891  326380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:04:48.507891  326380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:04:48.575471  326380 kubeadm.go:1113] duration metric: took 4.675114486s to wait for elevateKubeSystemPrivileges
	I1018 15:04:48.575517  326380 kubeadm.go:402] duration metric: took 14.385163609s to StartCluster
	I1018 15:04:48.575545  326380 settings.go:142] acquiring lock: {Name:mk9d9d72167811b3554435e137ed17137bf54a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:04:48.575614  326380 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:04:48.577012  326380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/kubeconfig: {Name:mkdc6fbc9be8e57447490b30dd5bb0086611876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:04:48.577267  326380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 15:04:48.577292  326380 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 15:04:48.577268  326380 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:04:48.577373  326380 addons.go:69] Setting storage-provisioner=true in profile "no-preload-165275"
	I1018 15:04:48.577393  326380 addons.go:238] Setting addon storage-provisioner=true in "no-preload-165275"
	I1018 15:04:48.577474  326380 host.go:66] Checking if "no-preload-165275" exists ...
	I1018 15:04:48.577493  326380 config.go:182] Loaded profile config "no-preload-165275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:04:48.577395  326380 addons.go:69] Setting default-storageclass=true in profile "no-preload-165275"
	I1018 15:04:48.577565  326380 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-165275"
	I1018 15:04:48.577940  326380 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:04:48.578069  326380 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:04:48.578899  326380 out.go:179] * Verifying Kubernetes components...
	I1018 15:04:48.580378  326380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:04:48.600995  326380 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 15:04:48.602008  326380 addons.go:238] Setting addon default-storageclass=true in "no-preload-165275"
	I1018 15:04:48.602060  326380 host.go:66] Checking if "no-preload-165275" exists ...
	I1018 15:04:48.602265  326380 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 15:04:48.602287  326380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 15:04:48.602343  326380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:04:48.602547  326380 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:04:48.628079  326380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa Username:docker}
	I1018 15:04:48.633145  326380 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 15:04:48.633169  326380 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 15:04:48.633238  326380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:04:48.656991  326380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa Username:docker}
	I1018 15:04:48.674356  326380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 15:04:48.716314  326380 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:04:48.744043  326380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 15:04:48.766166  326380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 15:04:48.868870  326380 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1018 15:04:48.870586  326380 node_ready.go:35] waiting up to 6m0s for node "no-preload-165275" to be "Ready" ...
	I1018 15:04:49.072572  326380 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1018 15:04:44.421874  278049 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 15:04:44.421897  278049 logs.go:123] Gathering logs for kube-apiserver [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d] ...
	I1018 15:04:44.422029  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:04:44.465111  278049 logs.go:123] Gathering logs for kube-controller-manager [8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14] ...
	I1018 15:04:44.465152  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14"
	W1018 15:04:44.503547  278049 logs.go:130] failed kube-controller-manager [8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14": Process exited with status 1
	stdout:
	
	stderr:
	E1018 15:04:44.499777    7003 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14\": container with ID starting with 8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14 not found: ID does not exist" containerID="8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14"
	time="2025-10-18T15:04:44Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14\": container with ID starting with 8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1018 15:04:44.499777    7003 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14\": container with ID starting with 8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14 not found: ID does not exist" containerID="8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14"
	time="2025-10-18T15:04:44Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14\": container with ID starting with 8052e57d723acfee4f4b1f8c0f7fbed3516f1baa36e02c81adfb6a0bba60dd14 not found: ID does not exist"
	
	** /stderr **
	I1018 15:04:44.503570  278049 logs.go:123] Gathering logs for CRI-O ...
	I1018 15:04:44.503585  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 15:04:44.586434  278049 logs.go:123] Gathering logs for container status ...
	I1018 15:04:44.586482  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 15:04:44.646731  278049 logs.go:123] Gathering logs for kube-scheduler [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011] ...
	I1018 15:04:44.646835  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:04:44.727855  278049 logs.go:123] Gathering logs for kube-controller-manager [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a] ...
	I1018 15:04:44.727922  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:04:47.270991  278049 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 15:04:47.272883  278049 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1018 15:04:47.272992  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 15:04:47.273054  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 15:04:47.304961  278049 cri.go:89] found id: "daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:04:47.304993  278049 cri.go:89] found id: ""
	I1018 15:04:47.305015  278049 logs.go:282] 1 containers: [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d]
	I1018 15:04:47.305073  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:47.309651  278049 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 15:04:47.309725  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 15:04:47.343217  278049 cri.go:89] found id: ""
	I1018 15:04:47.343246  278049 logs.go:282] 0 containers: []
	W1018 15:04:47.343257  278049 logs.go:284] No container was found matching "etcd"
	I1018 15:04:47.343264  278049 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 15:04:47.343321  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 15:04:47.373333  278049 cri.go:89] found id: ""
	I1018 15:04:47.373364  278049 logs.go:282] 0 containers: []
	W1018 15:04:47.373376  278049 logs.go:284] No container was found matching "coredns"
	I1018 15:04:47.373385  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 15:04:47.373453  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 15:04:47.404535  278049 cri.go:89] found id: "3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:04:47.404565  278049 cri.go:89] found id: ""
	I1018 15:04:47.404576  278049 logs.go:282] 1 containers: [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011]
	I1018 15:04:47.404643  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:47.411695  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 15:04:47.411759  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 15:04:47.445266  278049 cri.go:89] found id: ""
	I1018 15:04:47.445295  278049 logs.go:282] 0 containers: []
	W1018 15:04:47.445303  278049 logs.go:284] No container was found matching "kube-proxy"
	I1018 15:04:47.445308  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 15:04:47.445371  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 15:04:47.476157  278049 cri.go:89] found id: "5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:04:47.476176  278049 cri.go:89] found id: ""
	I1018 15:04:47.476185  278049 logs.go:282] 1 containers: [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a]
	I1018 15:04:47.476236  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:47.480626  278049 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 15:04:47.480698  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 15:04:47.509243  278049 cri.go:89] found id: ""
	I1018 15:04:47.509268  278049 logs.go:282] 0 containers: []
	W1018 15:04:47.509279  278049 logs.go:284] No container was found matching "kindnet"
	I1018 15:04:47.509287  278049 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 15:04:47.509359  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 15:04:47.541336  278049 cri.go:89] found id: ""
	I1018 15:04:47.541362  278049 logs.go:282] 0 containers: []
	W1018 15:04:47.541370  278049 logs.go:284] No container was found matching "storage-provisioner"
	I1018 15:04:47.541379  278049 logs.go:123] Gathering logs for kube-scheduler [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011] ...
	I1018 15:04:47.541393  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:04:47.614206  278049 logs.go:123] Gathering logs for kube-controller-manager [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a] ...
	I1018 15:04:47.614258  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:04:47.655198  278049 logs.go:123] Gathering logs for CRI-O ...
	I1018 15:04:47.655226  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 15:04:47.717766  278049 logs.go:123] Gathering logs for container status ...
	I1018 15:04:47.717809  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 15:04:47.759491  278049 logs.go:123] Gathering logs for kubelet ...
	I1018 15:04:47.759528  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 15:04:47.884691  278049 logs.go:123] Gathering logs for dmesg ...
	I1018 15:04:47.884726  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 15:04:47.904067  278049 logs.go:123] Gathering logs for describe nodes ...
	I1018 15:04:47.904106  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 15:04:47.974498  278049 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 15:04:47.974524  278049 logs.go:123] Gathering logs for kube-apiserver [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d] ...
	I1018 15:04:47.974539  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:04:47.859653  331720 addons.go:514] duration metric: took 3.319424925s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1018 15:04:47.860738  331720 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1018 15:04:47.860785  331720 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1018 15:04:48.356081  331720 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 15:04:48.360647  331720 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1018 15:04:48.362305  331720 api_server.go:141] control plane version: v1.28.0
	I1018 15:04:48.362338  331720 api_server.go:131] duration metric: took 506.750992ms to wait for apiserver health ...
	I1018 15:04:48.362350  331720 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:04:48.366107  331720 system_pods.go:59] 8 kube-system pods found
	I1018 15:04:48.366144  331720 system_pods.go:61] "coredns-5dd5756b68-j8xvf" [a4cd643f-8ca1-45d8-90e5-e114506edbee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:04:48.366154  331720 system_pods.go:61] "etcd-old-k8s-version-948537" [a3eb816c-9bcf-4d8c-8e66-30e49c9aa1a2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 15:04:48.366164  331720 system_pods.go:61] "kindnet-xwd4j" [21ae3860-2d55-4c5c-8e1a-19ad2fe19dc6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 15:04:48.366177  331720 system_pods.go:61] "kube-apiserver-old-k8s-version-948537" [506cbe8f-14fe-4bc8-82e7-006b9fa34aa6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 15:04:48.366189  331720 system_pods.go:61] "kube-controller-manager-old-k8s-version-948537" [3455ebd6-6e53-490f-9241-12b3c2139c9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 15:04:48.366201  331720 system_pods.go:61] "kube-proxy-kwt74" [e0a3d7d2-09ef-478b-85b5-f07938fcc069] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 15:04:48.366208  331720 system_pods.go:61] "kube-scheduler-old-k8s-version-948537" [ea7b96d2-1470-405b-996d-4faacdd788d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 15:04:48.366213  331720 system_pods.go:61] "storage-provisioner" [309bbd8a-c9c8-4f67-b838-aa0c230f04c5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:04:48.366223  331720 system_pods.go:74] duration metric: took 3.865809ms to wait for pod list to return data ...
	I1018 15:04:48.366234  331720 default_sa.go:34] waiting for default service account to be created ...
	I1018 15:04:48.368239  331720 default_sa.go:45] found service account: "default"
	I1018 15:04:48.368256  331720 default_sa.go:55] duration metric: took 2.016505ms for default service account to be created ...
	I1018 15:04:48.368264  331720 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 15:04:48.371405  331720 system_pods.go:86] 8 kube-system pods found
	I1018 15:04:48.371437  331720 system_pods.go:89] "coredns-5dd5756b68-j8xvf" [a4cd643f-8ca1-45d8-90e5-e114506edbee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:04:48.371448  331720 system_pods.go:89] "etcd-old-k8s-version-948537" [a3eb816c-9bcf-4d8c-8e66-30e49c9aa1a2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 15:04:48.371458  331720 system_pods.go:89] "kindnet-xwd4j" [21ae3860-2d55-4c5c-8e1a-19ad2fe19dc6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 15:04:48.371470  331720 system_pods.go:89] "kube-apiserver-old-k8s-version-948537" [506cbe8f-14fe-4bc8-82e7-006b9fa34aa6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 15:04:48.371476  331720 system_pods.go:89] "kube-controller-manager-old-k8s-version-948537" [3455ebd6-6e53-490f-9241-12b3c2139c9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 15:04:48.371485  331720 system_pods.go:89] "kube-proxy-kwt74" [e0a3d7d2-09ef-478b-85b5-f07938fcc069] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 15:04:48.371490  331720 system_pods.go:89] "kube-scheduler-old-k8s-version-948537" [ea7b96d2-1470-405b-996d-4faacdd788d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 15:04:48.371500  331720 system_pods.go:89] "storage-provisioner" [309bbd8a-c9c8-4f67-b838-aa0c230f04c5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:04:48.371511  331720 system_pods.go:126] duration metric: took 3.240784ms to wait for k8s-apps to be running ...
	I1018 15:04:48.371521  331720 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 15:04:48.371561  331720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:04:48.385578  331720 system_svc.go:56] duration metric: took 14.046516ms WaitForService to wait for kubelet
	I1018 15:04:48.385609  331720 kubeadm.go:586] duration metric: took 3.845409655s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:04:48.385634  331720 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:04:48.388567  331720 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 15:04:48.388607  331720 node_conditions.go:123] node cpu capacity is 8
	I1018 15:04:48.388624  331720 node_conditions.go:105] duration metric: took 2.984131ms to run NodePressure ...
	I1018 15:04:48.388639  331720 start.go:241] waiting for startup goroutines ...
	I1018 15:04:48.388653  331720 start.go:246] waiting for cluster config update ...
	I1018 15:04:48.388668  331720 start.go:255] writing updated cluster config ...
	I1018 15:04:48.389007  331720 ssh_runner.go:195] Run: rm -f paused
	I1018 15:04:48.393550  331720 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:04:48.397667  331720 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-j8xvf" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 15:04:50.404497  331720 pod_ready.go:104] pod "coredns-5dd5756b68-j8xvf" is not "Ready", error: <nil>
	I1018 15:04:49.073992  326380 addons.go:514] duration metric: took 496.695644ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 15:04:49.373989  326380 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-165275" context rescaled to 1 replicas
	W1018 15:04:50.874055  326380 node_ready.go:57] node "no-preload-165275" has "Ready":"False" status (will retry)
	I1018 15:04:50.510172  278049 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 15:04:50.510616  278049 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1018 15:04:50.510673  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 15:04:50.510747  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 15:04:50.546295  278049 cri.go:89] found id: "daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:04:50.546323  278049 cri.go:89] found id: ""
	I1018 15:04:50.546334  278049 logs.go:282] 1 containers: [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d]
	I1018 15:04:50.546394  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:50.551856  278049 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 15:04:50.551957  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 15:04:50.583231  278049 cri.go:89] found id: ""
	I1018 15:04:50.583264  278049 logs.go:282] 0 containers: []
	W1018 15:04:50.583277  278049 logs.go:284] No container was found matching "etcd"
	I1018 15:04:50.583285  278049 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 15:04:50.583350  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 15:04:50.618865  278049 cri.go:89] found id: ""
	I1018 15:04:50.618900  278049 logs.go:282] 0 containers: []
	W1018 15:04:50.618934  278049 logs.go:284] No container was found matching "coredns"
	I1018 15:04:50.618944  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 15:04:50.619009  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 15:04:50.655205  278049 cri.go:89] found id: "3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:04:50.655230  278049 cri.go:89] found id: ""
	I1018 15:04:50.655240  278049 logs.go:282] 1 containers: [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011]
	I1018 15:04:50.655306  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:50.660523  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 15:04:50.660599  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 15:04:50.693500  278049 cri.go:89] found id: ""
	I1018 15:04:50.693530  278049 logs.go:282] 0 containers: []
	W1018 15:04:50.693540  278049 logs.go:284] No container was found matching "kube-proxy"
	I1018 15:04:50.693547  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 15:04:50.693610  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 15:04:50.727888  278049 cri.go:89] found id: "5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:04:50.727946  278049 cri.go:89] found id: ""
	I1018 15:04:50.727958  278049 logs.go:282] 1 containers: [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a]
	I1018 15:04:50.728031  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:50.733251  278049 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 15:04:50.733323  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 15:04:50.763194  278049 cri.go:89] found id: ""
	I1018 15:04:50.763221  278049 logs.go:282] 0 containers: []
	W1018 15:04:50.763229  278049 logs.go:284] No container was found matching "kindnet"
	I1018 15:04:50.763235  278049 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 15:04:50.763282  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 15:04:50.792714  278049 cri.go:89] found id: ""
	I1018 15:04:50.792746  278049 logs.go:282] 0 containers: []
	W1018 15:04:50.792757  278049 logs.go:284] No container was found matching "storage-provisioner"
	I1018 15:04:50.792769  278049 logs.go:123] Gathering logs for CRI-O ...
	I1018 15:04:50.792789  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 15:04:50.866142  278049 logs.go:123] Gathering logs for container status ...
	I1018 15:04:50.866194  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 15:04:50.908897  278049 logs.go:123] Gathering logs for kubelet ...
	I1018 15:04:50.908955  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 15:04:51.052813  278049 logs.go:123] Gathering logs for dmesg ...
	I1018 15:04:51.052863  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 15:04:51.074667  278049 logs.go:123] Gathering logs for describe nodes ...
	I1018 15:04:51.074708  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 15:04:51.142117  278049 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 15:04:51.142146  278049 logs.go:123] Gathering logs for kube-apiserver [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d] ...
	I1018 15:04:51.142176  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:04:51.177494  278049 logs.go:123] Gathering logs for kube-scheduler [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011] ...
	I1018 15:04:51.177532  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:04:51.233288  278049 logs.go:123] Gathering logs for kube-controller-manager [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a] ...
	I1018 15:04:51.233329  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:04:53.764236  278049 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 15:04:53.764704  278049 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1018 15:04:53.764759  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 15:04:53.764819  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 15:04:53.793204  278049 cri.go:89] found id: "daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:04:53.793228  278049 cri.go:89] found id: ""
	I1018 15:04:53.793237  278049 logs.go:282] 1 containers: [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d]
	I1018 15:04:53.793298  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:53.797606  278049 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 15:04:53.797678  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 15:04:53.825080  278049 cri.go:89] found id: ""
	I1018 15:04:53.825103  278049 logs.go:282] 0 containers: []
	W1018 15:04:53.825112  278049 logs.go:284] No container was found matching "etcd"
	I1018 15:04:53.825118  278049 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 15:04:53.825174  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 15:04:53.852797  278049 cri.go:89] found id: ""
	I1018 15:04:53.852830  278049 logs.go:282] 0 containers: []
	W1018 15:04:53.852841  278049 logs.go:284] No container was found matching "coredns"
	I1018 15:04:53.852856  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 15:04:53.852935  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 15:04:53.881646  278049 cri.go:89] found id: "3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:04:53.881672  278049 cri.go:89] found id: ""
	I1018 15:04:53.881683  278049 logs.go:282] 1 containers: [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011]
	I1018 15:04:53.881750  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:53.885791  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 15:04:53.885853  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 15:04:53.914636  278049 cri.go:89] found id: ""
	I1018 15:04:53.914670  278049 logs.go:282] 0 containers: []
	W1018 15:04:53.914682  278049 logs.go:284] No container was found matching "kube-proxy"
	I1018 15:04:53.914692  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 15:04:53.914746  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 15:04:53.942524  278049 cri.go:89] found id: "5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:04:53.942552  278049 cri.go:89] found id: ""
	I1018 15:04:53.942563  278049 logs.go:282] 1 containers: [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a]
	I1018 15:04:53.942625  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:53.946966  278049 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 15:04:53.947031  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 15:04:53.975649  278049 cri.go:89] found id: ""
	I1018 15:04:53.975676  278049 logs.go:282] 0 containers: []
	W1018 15:04:53.975687  278049 logs.go:284] No container was found matching "kindnet"
	I1018 15:04:53.975695  278049 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 15:04:53.975765  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 15:04:54.002441  278049 cri.go:89] found id: ""
	I1018 15:04:54.002470  278049 logs.go:282] 0 containers: []
	W1018 15:04:54.002500  278049 logs.go:284] No container was found matching "storage-provisioner"
	I1018 15:04:54.002514  278049 logs.go:123] Gathering logs for describe nodes ...
	I1018 15:04:54.002534  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 15:04:54.059062  278049 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 15:04:54.059091  278049 logs.go:123] Gathering logs for kube-apiserver [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d] ...
	I1018 15:04:54.059113  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:04:54.092944  278049 logs.go:123] Gathering logs for kube-scheduler [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011] ...
	I1018 15:04:54.092977  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:04:54.145328  278049 logs.go:123] Gathering logs for kube-controller-manager [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a] ...
	I1018 15:04:54.145367  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:04:54.177095  278049 logs.go:123] Gathering logs for CRI-O ...
	I1018 15:04:54.177127  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 15:04:54.226810  278049 logs.go:123] Gathering logs for container status ...
	I1018 15:04:54.226850  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 15:04:54.259407  278049 logs.go:123] Gathering logs for kubelet ...
	I1018 15:04:54.259436  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 15:04:54.350585  278049 logs.go:123] Gathering logs for dmesg ...
	I1018 15:04:54.350623  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1018 15:04:52.902949  331720 pod_ready.go:104] pod "coredns-5dd5756b68-j8xvf" is not "Ready", error: <nil>
	W1018 15:04:54.904010  331720 pod_ready.go:104] pod "coredns-5dd5756b68-j8xvf" is not "Ready", error: <nil>
	W1018 15:04:53.374373  326380 node_ready.go:57] node "no-preload-165275" has "Ready":"False" status (will retry)
	W1018 15:04:55.873989  326380 node_ready.go:57] node "no-preload-165275" has "Ready":"False" status (will retry)
	W1018 15:04:57.874408  326380 node_ready.go:57] node "no-preload-165275" has "Ready":"False" status (will retry)
	I1018 15:04:56.869974  278049 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 15:04:56.870453  278049 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1018 15:04:56.870508  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 15:04:56.870566  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 15:04:56.898579  278049 cri.go:89] found id: "daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:04:56.898603  278049 cri.go:89] found id: ""
	I1018 15:04:56.898611  278049 logs.go:282] 1 containers: [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d]
	I1018 15:04:56.898666  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:56.903493  278049 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 15:04:56.903555  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 15:04:56.932227  278049 cri.go:89] found id: ""
	I1018 15:04:56.932255  278049 logs.go:282] 0 containers: []
	W1018 15:04:56.932265  278049 logs.go:284] No container was found matching "etcd"
	I1018 15:04:56.932272  278049 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 15:04:56.932330  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 15:04:56.959991  278049 cri.go:89] found id: ""
	I1018 15:04:56.960019  278049 logs.go:282] 0 containers: []
	W1018 15:04:56.960028  278049 logs.go:284] No container was found matching "coredns"
	I1018 15:04:56.960035  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 15:04:56.960092  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 15:04:56.986875  278049 cri.go:89] found id: "3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:04:56.986897  278049 cri.go:89] found id: ""
	I1018 15:04:56.986906  278049 logs.go:282] 1 containers: [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011]
	I1018 15:04:56.986991  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:56.991239  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 15:04:56.991313  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 15:04:57.020298  278049 cri.go:89] found id: ""
	I1018 15:04:57.020322  278049 logs.go:282] 0 containers: []
	W1018 15:04:57.020330  278049 logs.go:284] No container was found matching "kube-proxy"
	I1018 15:04:57.020335  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 15:04:57.020381  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 15:04:57.047941  278049 cri.go:89] found id: "5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:04:57.047969  278049 cri.go:89] found id: ""
	I1018 15:04:57.047979  278049 logs.go:282] 1 containers: [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a]
	I1018 15:04:57.048029  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:04:57.052735  278049 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 15:04:57.052794  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 15:04:57.083820  278049 cri.go:89] found id: ""
	I1018 15:04:57.083845  278049 logs.go:282] 0 containers: []
	W1018 15:04:57.083853  278049 logs.go:284] No container was found matching "kindnet"
	I1018 15:04:57.083858  278049 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 15:04:57.083906  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 15:04:57.111030  278049 cri.go:89] found id: ""
	I1018 15:04:57.111063  278049 logs.go:282] 0 containers: []
	W1018 15:04:57.111077  278049 logs.go:284] No container was found matching "storage-provisioner"
	I1018 15:04:57.111087  278049 logs.go:123] Gathering logs for container status ...
	I1018 15:04:57.111102  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 15:04:57.143844  278049 logs.go:123] Gathering logs for kubelet ...
	I1018 15:04:57.143879  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 15:04:57.242254  278049 logs.go:123] Gathering logs for dmesg ...
	I1018 15:04:57.242289  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 15:04:57.258813  278049 logs.go:123] Gathering logs for describe nodes ...
	I1018 15:04:57.258841  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 15:04:57.315087  278049 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 15:04:57.315110  278049 logs.go:123] Gathering logs for kube-apiserver [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d] ...
	I1018 15:04:57.315124  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:04:57.346919  278049 logs.go:123] Gathering logs for kube-scheduler [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011] ...
	I1018 15:04:57.346953  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:04:57.403059  278049 logs.go:123] Gathering logs for kube-controller-manager [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a] ...
	I1018 15:04:57.403098  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:04:57.431892  278049 logs.go:123] Gathering logs for CRI-O ...
	I1018 15:04:57.431941  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
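
The cri.go/ssh_runner.go lines above show how minikube enumerates containers for each control-plane component: it runs `sudo crictl ps -a --quiet --name=<component>` and treats empty output as "No container was found matching". Below is a minimal stand-alone sketch of that lookup, assuming a local crictl binary rather than minikube's SSH runner; the function name listContainerIDs is illustrative, not minikube's actual API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the lookup in the log: `crictl ps -a --quiet
// --name=<component>` prints one container ID per line, or nothing at all
// when no container matches.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
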
	W1018 15:04:57.403585  331720 pod_ready.go:104] pod "coredns-5dd5756b68-j8xvf" is not "Ready", error: <nil>
	W1018 15:04:59.404750  331720 pod_ready.go:104] pod "coredns-5dd5756b68-j8xvf" is not "Ready", error: <nil>
	W1018 15:05:01.902990  331720 pod_ready.go:104] pod "coredns-5dd5756b68-j8xvf" is not "Ready", error: <nil>
	W1018 15:05:00.375771  326380 node_ready.go:57] node "no-preload-165275" has "Ready":"False" status (will retry)
	I1018 15:05:02.373613  326380 node_ready.go:49] node "no-preload-165275" is "Ready"
	I1018 15:05:02.373639  326380 node_ready.go:38] duration metric: took 13.502976887s for node "no-preload-165275" to be "Ready" ...
	I1018 15:05:02.373655  326380 api_server.go:52] waiting for apiserver process to appear ...
	I1018 15:05:02.373701  326380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:05:02.385552  326380 api_server.go:72] duration metric: took 13.808161774s to wait for apiserver process to appear ...
	I1018 15:05:02.385577  326380 api_server.go:88] waiting for apiserver healthz status ...
	I1018 15:05:02.385595  326380 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 15:05:02.390508  326380 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 15:05:02.391512  326380 api_server.go:141] control plane version: v1.34.1
	I1018 15:05:02.391540  326380 api_server.go:131] duration metric: took 5.954721ms to wait for apiserver health ...
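
The healthz wait above polls https://192.168.85.2:8443/healthz until it returns 200 with body "ok"; when the dial fails, as in the interleaved 278049 run ("connect: connection refused"), the probe is logged as "stopped" and retried. A minimal sketch of one such probe follows; certificate handling is deliberately simplified here (verification skipped for illustration), which the real client would not do.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification is an illustration-only shortcut; the
		// apiserver's serving cert is not trusted by this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "dial tcp ...: connect: connection refused" -> "stopped"
		return fmt.Errorf("stopped: %w", err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || strings.TrimSpace(string(body)) != "ok" {
		return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.85.2:8443/healthz"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("https://192.168.85.2:8443/healthz returned 200: ok")
}

(The sketch needs "strings" in its import list as well; it trims the trailing newline some servers append to the "ok" body.)
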
	I1018 15:05:02.391548  326380 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:05:02.394529  326380 system_pods.go:59] 8 kube-system pods found
	I1018 15:05:02.394556  326380 system_pods.go:61] "coredns-66bc5c9577-cmgb8" [dd196175-055d-422b-9d50-4c2d27396003] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:05:02.394562  326380 system_pods.go:61] "etcd-no-preload-165275" [3ebea1f8-51c9-4883-8c2a-6c37418aa6a8] Running
	I1018 15:05:02.394569  326380 system_pods.go:61] "kindnet-8c5w4" [4a12831b-4de0-40a5-8d0d-c14ce5eb116f] Running
	I1018 15:05:02.394573  326380 system_pods.go:61] "kube-apiserver-no-preload-165275" [b2c0b9d9-ecc2-4b6f-b867-45f859eff1e6] Running
	I1018 15:05:02.394577  326380 system_pods.go:61] "kube-controller-manager-no-preload-165275" [794a7195-4d8e-4a3e-8076-775097ede465] Running
	I1018 15:05:02.394580  326380 system_pods.go:61] "kube-proxy-84fhl" [0a001757-fcdc-48f4-96b6-55e6b0a44e15] Running
	I1018 15:05:02.394583  326380 system_pods.go:61] "kube-scheduler-no-preload-165275" [a71593a7-0e57-4319-be95-a7dbf5fb4ff4] Running
	I1018 15:05:02.394589  326380 system_pods.go:61] "storage-provisioner" [c052552a-00af-4394-b24f-0c6fb821c17c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:05:02.394597  326380 system_pods.go:74] duration metric: took 3.044805ms to wait for pod list to return data ...
	I1018 15:05:02.394606  326380 default_sa.go:34] waiting for default service account to be created ...
	I1018 15:05:02.396703  326380 default_sa.go:45] found service account: "default"
	I1018 15:05:02.396724  326380 default_sa.go:55] duration metric: took 2.10968ms for default service account to be created ...
	I1018 15:05:02.396734  326380 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 15:05:02.399264  326380 system_pods.go:86] 8 kube-system pods found
	I1018 15:05:02.399290  326380 system_pods.go:89] "coredns-66bc5c9577-cmgb8" [dd196175-055d-422b-9d50-4c2d27396003] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:05:02.399296  326380 system_pods.go:89] "etcd-no-preload-165275" [3ebea1f8-51c9-4883-8c2a-6c37418aa6a8] Running
	I1018 15:05:02.399301  326380 system_pods.go:89] "kindnet-8c5w4" [4a12831b-4de0-40a5-8d0d-c14ce5eb116f] Running
	I1018 15:05:02.399305  326380 system_pods.go:89] "kube-apiserver-no-preload-165275" [b2c0b9d9-ecc2-4b6f-b867-45f859eff1e6] Running
	I1018 15:05:02.399308  326380 system_pods.go:89] "kube-controller-manager-no-preload-165275" [794a7195-4d8e-4a3e-8076-775097ede465] Running
	I1018 15:05:02.399311  326380 system_pods.go:89] "kube-proxy-84fhl" [0a001757-fcdc-48f4-96b6-55e6b0a44e15] Running
	I1018 15:05:02.399314  326380 system_pods.go:89] "kube-scheduler-no-preload-165275" [a71593a7-0e57-4319-be95-a7dbf5fb4ff4] Running
	I1018 15:05:02.399322  326380 system_pods.go:89] "storage-provisioner" [c052552a-00af-4394-b24f-0c6fb821c17c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:05:02.399359  326380 retry.go:31] will retry after 187.79228ms: missing components: kube-dns
	I1018 15:05:02.591100  326380 system_pods.go:86] 8 kube-system pods found
	I1018 15:05:02.591135  326380 system_pods.go:89] "coredns-66bc5c9577-cmgb8" [dd196175-055d-422b-9d50-4c2d27396003] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:05:02.591143  326380 system_pods.go:89] "etcd-no-preload-165275" [3ebea1f8-51c9-4883-8c2a-6c37418aa6a8] Running
	I1018 15:05:02.591150  326380 system_pods.go:89] "kindnet-8c5w4" [4a12831b-4de0-40a5-8d0d-c14ce5eb116f] Running
	I1018 15:05:02.591154  326380 system_pods.go:89] "kube-apiserver-no-preload-165275" [b2c0b9d9-ecc2-4b6f-b867-45f859eff1e6] Running
	I1018 15:05:02.591158  326380 system_pods.go:89] "kube-controller-manager-no-preload-165275" [794a7195-4d8e-4a3e-8076-775097ede465] Running
	I1018 15:05:02.591161  326380 system_pods.go:89] "kube-proxy-84fhl" [0a001757-fcdc-48f4-96b6-55e6b0a44e15] Running
	I1018 15:05:02.591164  326380 system_pods.go:89] "kube-scheduler-no-preload-165275" [a71593a7-0e57-4319-be95-a7dbf5fb4ff4] Running
	I1018 15:05:02.591168  326380 system_pods.go:89] "storage-provisioner" [c052552a-00af-4394-b24f-0c6fb821c17c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:05:02.591184  326380 retry.go:31] will retry after 379.445005ms: missing components: kube-dns
	I1018 15:05:02.975364  326380 system_pods.go:86] 8 kube-system pods found
	I1018 15:05:02.975408  326380 system_pods.go:89] "coredns-66bc5c9577-cmgb8" [dd196175-055d-422b-9d50-4c2d27396003] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:05:02.975419  326380 system_pods.go:89] "etcd-no-preload-165275" [3ebea1f8-51c9-4883-8c2a-6c37418aa6a8] Running
	I1018 15:05:02.975430  326380 system_pods.go:89] "kindnet-8c5w4" [4a12831b-4de0-40a5-8d0d-c14ce5eb116f] Running
	I1018 15:05:02.975438  326380 system_pods.go:89] "kube-apiserver-no-preload-165275" [b2c0b9d9-ecc2-4b6f-b867-45f859eff1e6] Running
	I1018 15:05:02.975455  326380 system_pods.go:89] "kube-controller-manager-no-preload-165275" [794a7195-4d8e-4a3e-8076-775097ede465] Running
	I1018 15:05:02.975461  326380 system_pods.go:89] "kube-proxy-84fhl" [0a001757-fcdc-48f4-96b6-55e6b0a44e15] Running
	I1018 15:05:02.975468  326380 system_pods.go:89] "kube-scheduler-no-preload-165275" [a71593a7-0e57-4319-be95-a7dbf5fb4ff4] Running
	I1018 15:05:02.975477  326380 system_pods.go:89] "storage-provisioner" [c052552a-00af-4394-b24f-0c6fb821c17c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:05:02.975497  326380 retry.go:31] will retry after 451.408394ms: missing components: kube-dns
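
The retry.go lines above show the wait loop for kube-system pods: each pass lists the pods, reports what is still missing ("missing components: kube-dns"), then sleeps for a growing, jittered interval before trying again. Here is a compact sketch of that pattern with a stand-in check function; retryUntil and its backoff constants are illustrative, not minikube's actual retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-runs check after a growing, jittered delay until it
// succeeds or the deadline expires, mirroring the varying
// "will retry after ..." intervals in the log.
func retryUntil(deadline time.Duration, check func() error) error {
	start := time.Now()
	wait := 100 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out: %w", err)
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		wait *= 2
	}
}

func main() {
	attempts := 0
	_ = retryUntil(5*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println("all components running")
}
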
	I1018 15:04:59.983209  278049 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 15:04:59.983593  278049 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1018 15:04:59.983658  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 15:04:59.983721  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 15:05:00.027119  278049 cri.go:89] found id: "daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:05:00.027147  278049 cri.go:89] found id: ""
	I1018 15:05:00.027158  278049 logs.go:282] 1 containers: [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d]
	I1018 15:05:00.027217  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:05:00.032378  278049 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 15:05:00.032448  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 15:05:00.071673  278049 cri.go:89] found id: ""
	I1018 15:05:00.071707  278049 logs.go:282] 0 containers: []
	W1018 15:05:00.071717  278049 logs.go:284] No container was found matching "etcd"
	I1018 15:05:00.071725  278049 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 15:05:00.071787  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 15:05:00.108248  278049 cri.go:89] found id: ""
	I1018 15:05:00.108276  278049 logs.go:282] 0 containers: []
	W1018 15:05:00.108286  278049 logs.go:284] No container was found matching "coredns"
	I1018 15:05:00.108294  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 15:05:00.108355  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 15:05:00.148359  278049 cri.go:89] found id: "3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:05:00.148385  278049 cri.go:89] found id: ""
	I1018 15:05:00.148395  278049 logs.go:282] 1 containers: [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011]
	I1018 15:05:00.148456  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:05:00.153879  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 15:05:00.153966  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 15:05:00.193422  278049 cri.go:89] found id: ""
	I1018 15:05:00.193449  278049 logs.go:282] 0 containers: []
	W1018 15:05:00.193459  278049 logs.go:284] No container was found matching "kube-proxy"
	I1018 15:05:00.193467  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 15:05:00.193528  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 15:05:00.231590  278049 cri.go:89] found id: "5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:05:00.231615  278049 cri.go:89] found id: ""
	I1018 15:05:00.231625  278049 logs.go:282] 1 containers: [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a]
	I1018 15:05:00.231684  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:05:00.236882  278049 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 15:05:00.236973  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 15:05:00.276553  278049 cri.go:89] found id: ""
	I1018 15:05:00.276594  278049 logs.go:282] 0 containers: []
	W1018 15:05:00.276606  278049 logs.go:284] No container was found matching "kindnet"
	I1018 15:05:00.276614  278049 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 15:05:00.276677  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 15:05:00.312650  278049 cri.go:89] found id: ""
	I1018 15:05:00.312681  278049 logs.go:282] 0 containers: []
	W1018 15:05:00.312693  278049 logs.go:284] No container was found matching "storage-provisioner"
	I1018 15:05:00.312705  278049 logs.go:123] Gathering logs for dmesg ...
	I1018 15:05:00.312721  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 15:05:00.334538  278049 logs.go:123] Gathering logs for describe nodes ...
	I1018 15:05:00.334581  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 15:05:00.419478  278049 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 15:05:00.419506  278049 logs.go:123] Gathering logs for kube-apiserver [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d] ...
	I1018 15:05:00.419524  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:05:00.464987  278049 logs.go:123] Gathering logs for kube-scheduler [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011] ...
	I1018 15:05:00.465036  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:05:00.541333  278049 logs.go:123] Gathering logs for kube-controller-manager [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a] ...
	I1018 15:05:00.541370  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:05:00.576176  278049 logs.go:123] Gathering logs for CRI-O ...
	I1018 15:05:00.576210  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 15:05:00.642118  278049 logs.go:123] Gathering logs for container status ...
	I1018 15:05:00.642157  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 15:05:00.681117  278049 logs.go:123] Gathering logs for kubelet ...
	I1018 15:05:00.681155  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 15:05:03.314996  278049 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 15:05:03.315463  278049 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1018 15:05:03.315529  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 15:05:03.315593  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 15:05:03.351081  278049 cri.go:89] found id: "daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:05:03.351106  278049 cri.go:89] found id: ""
	I1018 15:05:03.351118  278049 logs.go:282] 1 containers: [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d]
	I1018 15:05:03.351180  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:05:03.356393  278049 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 15:05:03.356461  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 15:05:03.390743  278049 cri.go:89] found id: ""
	I1018 15:05:03.390769  278049 logs.go:282] 0 containers: []
	W1018 15:05:03.390779  278049 logs.go:284] No container was found matching "etcd"
	I1018 15:05:03.390787  278049 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 15:05:03.390852  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 15:05:03.427103  278049 cri.go:89] found id: ""
	I1018 15:05:03.427133  278049 logs.go:282] 0 containers: []
	W1018 15:05:03.427144  278049 logs.go:284] No container was found matching "coredns"
	I1018 15:05:03.427152  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 15:05:03.427213  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 15:05:03.465414  278049 cri.go:89] found id: "3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:05:03.465436  278049 cri.go:89] found id: ""
	I1018 15:05:03.465446  278049 logs.go:282] 1 containers: [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011]
	I1018 15:05:03.465505  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:05:03.471230  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 15:05:03.471303  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 15:05:03.506405  278049 cri.go:89] found id: ""
	I1018 15:05:03.506434  278049 logs.go:282] 0 containers: []
	W1018 15:05:03.506444  278049 logs.go:284] No container was found matching "kube-proxy"
	I1018 15:05:03.506452  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 15:05:03.506513  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 15:05:03.541117  278049 cri.go:89] found id: "5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:05:03.541144  278049 cri.go:89] found id: ""
	I1018 15:05:03.541156  278049 logs.go:282] 1 containers: [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a]
	I1018 15:05:03.541225  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:05:03.546414  278049 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 15:05:03.546496  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 15:05:03.580739  278049 cri.go:89] found id: ""
	I1018 15:05:03.580766  278049 logs.go:282] 0 containers: []
	W1018 15:05:03.580777  278049 logs.go:284] No container was found matching "kindnet"
	I1018 15:05:03.580790  278049 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 15:05:03.580857  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 15:05:03.616245  278049 cri.go:89] found id: ""
	I1018 15:05:03.616275  278049 logs.go:282] 0 containers: []
	W1018 15:05:03.616287  278049 logs.go:284] No container was found matching "storage-provisioner"
	I1018 15:05:03.616298  278049 logs.go:123] Gathering logs for kube-controller-manager [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a] ...
	I1018 15:05:03.616314  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:05:03.652632  278049 logs.go:123] Gathering logs for CRI-O ...
	I1018 15:05:03.652671  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 15:05:03.725609  278049 logs.go:123] Gathering logs for container status ...
	I1018 15:05:03.725650  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 15:05:03.767744  278049 logs.go:123] Gathering logs for kubelet ...
	I1018 15:05:03.767781  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 15:05:03.899861  278049 logs.go:123] Gathering logs for dmesg ...
	I1018 15:05:03.899904  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 15:05:03.920545  278049 logs.go:123] Gathering logs for describe nodes ...
	I1018 15:05:03.920580  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 15:05:03.994724  278049 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 15:05:03.994751  278049 logs.go:123] Gathering logs for kube-apiserver [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d] ...
	I1018 15:05:03.994770  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:05:04.039352  278049 logs.go:123] Gathering logs for kube-scheduler [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011] ...
	I1018 15:05:04.039390  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:05:03.431882  326380 system_pods.go:86] 8 kube-system pods found
	I1018 15:05:03.431935  326380 system_pods.go:89] "coredns-66bc5c9577-cmgb8" [dd196175-055d-422b-9d50-4c2d27396003] Running
	I1018 15:05:03.431944  326380 system_pods.go:89] "etcd-no-preload-165275" [3ebea1f8-51c9-4883-8c2a-6c37418aa6a8] Running
	I1018 15:05:03.431950  326380 system_pods.go:89] "kindnet-8c5w4" [4a12831b-4de0-40a5-8d0d-c14ce5eb116f] Running
	I1018 15:05:03.431955  326380 system_pods.go:89] "kube-apiserver-no-preload-165275" [b2c0b9d9-ecc2-4b6f-b867-45f859eff1e6] Running
	I1018 15:05:03.431961  326380 system_pods.go:89] "kube-controller-manager-no-preload-165275" [794a7195-4d8e-4a3e-8076-775097ede465] Running
	I1018 15:05:03.431966  326380 system_pods.go:89] "kube-proxy-84fhl" [0a001757-fcdc-48f4-96b6-55e6b0a44e15] Running
	I1018 15:05:03.431971  326380 system_pods.go:89] "kube-scheduler-no-preload-165275" [a71593a7-0e57-4319-be95-a7dbf5fb4ff4] Running
	I1018 15:05:03.431975  326380 system_pods.go:89] "storage-provisioner" [c052552a-00af-4394-b24f-0c6fb821c17c] Running
	I1018 15:05:03.431987  326380 system_pods.go:126] duration metric: took 1.035245511s to wait for k8s-apps to be running ...
	I1018 15:05:03.432001  326380 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 15:05:03.432053  326380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:05:03.449417  326380 system_svc.go:56] duration metric: took 17.404382ms WaitForService to wait for kubelet
	I1018 15:05:03.449454  326380 kubeadm.go:586] duration metric: took 14.872066781s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:05:03.449478  326380 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:05:03.453201  326380 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 15:05:03.453233  326380 node_conditions.go:123] node cpu capacity is 8
	I1018 15:05:03.453249  326380 node_conditions.go:105] duration metric: took 3.765751ms to run NodePressure ...
	I1018 15:05:03.453266  326380 start.go:241] waiting for startup goroutines ...
	I1018 15:05:03.453276  326380 start.go:246] waiting for cluster config update ...
	I1018 15:05:03.453291  326380 start.go:255] writing updated cluster config ...
	I1018 15:05:03.453624  326380 ssh_runner.go:195] Run: rm -f paused
	I1018 15:05:03.458635  326380 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:05:03.463563  326380 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cmgb8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:05:03.470482  326380 pod_ready.go:94] pod "coredns-66bc5c9577-cmgb8" is "Ready"
	I1018 15:05:03.470513  326380 pod_ready.go:86] duration metric: took 6.922978ms for pod "coredns-66bc5c9577-cmgb8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:05:03.473978  326380 pod_ready.go:83] waiting for pod "etcd-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:05:03.479021  326380 pod_ready.go:94] pod "etcd-no-preload-165275" is "Ready"
	I1018 15:05:03.479048  326380 pod_ready.go:86] duration metric: took 5.045671ms for pod "etcd-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:05:03.481534  326380 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:05:03.486574  326380 pod_ready.go:94] pod "kube-apiserver-no-preload-165275" is "Ready"
	I1018 15:05:03.486655  326380 pod_ready.go:86] duration metric: took 5.092801ms for pod "kube-apiserver-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:05:03.489280  326380 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:05:03.863488  326380 pod_ready.go:94] pod "kube-controller-manager-no-preload-165275" is "Ready"
	I1018 15:05:03.863515  326380 pod_ready.go:86] duration metric: took 374.208662ms for pod "kube-controller-manager-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:05:04.064445  326380 pod_ready.go:83] waiting for pod "kube-proxy-84fhl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:05:04.463599  326380 pod_ready.go:94] pod "kube-proxy-84fhl" is "Ready"
	I1018 15:05:04.463631  326380 pod_ready.go:86] duration metric: took 399.158718ms for pod "kube-proxy-84fhl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:05:04.666180  326380 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:05:05.063624  326380 pod_ready.go:94] pod "kube-scheduler-no-preload-165275" is "Ready"
	I1018 15:05:05.063655  326380 pod_ready.go:86] duration metric: took 397.447834ms for pod "kube-scheduler-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:05:05.063671  326380 pod_ready.go:40] duration metric: took 1.605001083s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:05:05.126744  326380 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:05:05.156372  326380 out.go:179] * Done! kubectl is now configured to use "no-preload-165275" cluster and "default" namespace by default
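
After startup, the pod_ready.go lines above perform an extra wait: every kube-system pod carrying one of the listed labels (k8s-app=kube-dns, component=etcd, and so on) must report the Ready condition. The following is a hedged client-go sketch of that check, reusing the kubeconfig path and label selectors that appear in the log; podReady is an illustrative helper, not minikube's code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for i := range pods.Items {
			fmt.Printf("pod %q Ready=%v\n", pods.Items[i].Name, podReady(&pods.Items[i]))
		}
	}
}
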
	W1018 15:05:03.903964  331720 pod_ready.go:104] pod "coredns-5dd5756b68-j8xvf" is not "Ready", error: <nil>
	W1018 15:05:06.403023  331720 pod_ready.go:104] pod "coredns-5dd5756b68-j8xvf" is not "Ready", error: <nil>
	I1018 15:05:06.614286  278049 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 15:05:06.614669  278049 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1018 15:05:06.614723  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 15:05:06.614780  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 15:05:06.642958  278049 cri.go:89] found id: "daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:05:06.642979  278049 cri.go:89] found id: ""
	I1018 15:05:06.642987  278049 logs.go:282] 1 containers: [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d]
	I1018 15:05:06.643037  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:05:06.647133  278049 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 15:05:06.647199  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 15:05:06.674720  278049 cri.go:89] found id: ""
	I1018 15:05:06.674749  278049 logs.go:282] 0 containers: []
	W1018 15:05:06.674757  278049 logs.go:284] No container was found matching "etcd"
	I1018 15:05:06.674763  278049 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 15:05:06.674810  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 15:05:06.701268  278049 cri.go:89] found id: ""
	I1018 15:05:06.701302  278049 logs.go:282] 0 containers: []
	W1018 15:05:06.701313  278049 logs.go:284] No container was found matching "coredns"
	I1018 15:05:06.701321  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 15:05:06.701381  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 15:05:06.728194  278049 cri.go:89] found id: "3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:05:06.728218  278049 cri.go:89] found id: ""
	I1018 15:05:06.728227  278049 logs.go:282] 1 containers: [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011]
	I1018 15:05:06.728277  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:05:06.732277  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 15:05:06.732343  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 15:05:06.758656  278049 cri.go:89] found id: ""
	I1018 15:05:06.758700  278049 logs.go:282] 0 containers: []
	W1018 15:05:06.758710  278049 logs.go:284] No container was found matching "kube-proxy"
	I1018 15:05:06.758718  278049 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 15:05:06.758778  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 15:05:06.785302  278049 cri.go:89] found id: "5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:05:06.785323  278049 cri.go:89] found id: ""
	I1018 15:05:06.785330  278049 logs.go:282] 1 containers: [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a]
	I1018 15:05:06.785376  278049 ssh_runner.go:195] Run: which crictl
	I1018 15:05:06.789469  278049 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 15:05:06.789538  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 15:05:06.815857  278049 cri.go:89] found id: ""
	I1018 15:05:06.815891  278049 logs.go:282] 0 containers: []
	W1018 15:05:06.815900  278049 logs.go:284] No container was found matching "kindnet"
	I1018 15:05:06.815908  278049 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 15:05:06.815983  278049 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 15:05:06.843298  278049 cri.go:89] found id: ""
	I1018 15:05:06.843323  278049 logs.go:282] 0 containers: []
	W1018 15:05:06.843334  278049 logs.go:284] No container was found matching "storage-provisioner"
	I1018 15:05:06.843345  278049 logs.go:123] Gathering logs for container status ...
	I1018 15:05:06.843361  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 15:05:06.874800  278049 logs.go:123] Gathering logs for kubelet ...
	I1018 15:05:06.874832  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 15:05:06.967322  278049 logs.go:123] Gathering logs for dmesg ...
	I1018 15:05:06.967357  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 15:05:06.983336  278049 logs.go:123] Gathering logs for describe nodes ...
	I1018 15:05:06.983367  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 15:05:07.039969  278049 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 15:05:07.039990  278049 logs.go:123] Gathering logs for kube-apiserver [daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d] ...
	I1018 15:05:07.040003  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daa7ac6f68ee337967f3336fd3b2b853f9bc5eb60ab878eb0169f8d24beca89d"
	I1018 15:05:07.072655  278049 logs.go:123] Gathering logs for kube-scheduler [3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011] ...
	I1018 15:05:07.072689  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3b4da31ef621223de562823f0080772f60286d8ea6f9cedb66ea9919f3529011"
	I1018 15:05:07.129361  278049 logs.go:123] Gathering logs for kube-controller-manager [5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a] ...
	I1018 15:05:07.129403  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a3a3e32e51eafc653c243ba1d4e03d8bd4b23fa6672d08c0847cbb45a46631a"
	I1018 15:05:07.156878  278049 logs.go:123] Gathering logs for CRI-O ...
	I1018 15:05:07.156923  278049 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
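
Each diagnostic cycle in the 278049 run gathers logs from a fixed set of sources, one bash -c command per source: journalctl for the kubelet and CRI-O units, a filtered dmesg tail, kubectl describe nodes, and crictl logs --tail 400 for each discovered container. A minimal local sketch of that loop follows, using plain exec instead of minikube's ssh_runner; the command strings are copied from the log lines above.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":        "sudo journalctl -u kubelet -n 400",
		"dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":          "sudo journalctl -u crio -n 400",
		"describe nodes": "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	}
	for name, cmd := range sources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		// CombinedOutput keeps stderr, so failures such as the
		// "connection to the server localhost:8443 was refused"
		// message above remain visible in the gathered output.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("failed %s: %v\n%s", name, err, out)
			continue
		}
		fmt.Printf("%s", out)
	}
}
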
	W1018 15:05:08.403768  331720 pod_ready.go:104] pod "coredns-5dd5756b68-j8xvf" is not "Ready", error: <nil>
	W1018 15:05:10.903082  331720 pod_ready.go:104] pod "coredns-5dd5756b68-j8xvf" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 18 15:05:02 no-preload-165275 crio[774]: time="2025-10-18T15:05:02.538204637Z" level=info msg="Starting container: 1812f6739a8a0dfcb86f4083ed337aaf608ac9a433516daafa68b1f8261bbbe0" id=54bca348-6f87-4fad-950e-74a878d4550a name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:05:02 no-preload-165275 crio[774]: time="2025-10-18T15:05:02.540071043Z" level=info msg="Started container" PID=2907 containerID=1812f6739a8a0dfcb86f4083ed337aaf608ac9a433516daafa68b1f8261bbbe0 description=kube-system/coredns-66bc5c9577-cmgb8/coredns id=54bca348-6f87-4fad-950e-74a878d4550a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b67c89c373fac7d44642dbd7919b75d51e5cd0e5854f0234ca88fc6389d280a5
	Oct 18 15:05:05 no-preload-165275 crio[774]: time="2025-10-18T15:05:05.73462621Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d7210879-71bb-41f7-b9ac-ebfc19116210 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:05:05 no-preload-165275 crio[774]: time="2025-10-18T15:05:05.734725503Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:05:05 no-preload-165275 crio[774]: time="2025-10-18T15:05:05.739880773Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:53300686154a7f377e03bf714bb7853e42de28791d54df8bd49d25da0f081ae3 UID:71470317-9d5b-4040-a765-b12127d06e8f NetNS:/var/run/netns/43062ce5-f14b-4f43-8995-6cbb9fe4707b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128d70}] Aliases:map[]}"
	Oct 18 15:05:05 no-preload-165275 crio[774]: time="2025-10-18T15:05:05.739926498Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 15:05:05 no-preload-165275 crio[774]: time="2025-10-18T15:05:05.750233522Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:53300686154a7f377e03bf714bb7853e42de28791d54df8bd49d25da0f081ae3 UID:71470317-9d5b-4040-a765-b12127d06e8f NetNS:/var/run/netns/43062ce5-f14b-4f43-8995-6cbb9fe4707b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128d70}] Aliases:map[]}"
	Oct 18 15:05:05 no-preload-165275 crio[774]: time="2025-10-18T15:05:05.750386413Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 15:05:05 no-preload-165275 crio[774]: time="2025-10-18T15:05:05.751201726Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 15:05:05 no-preload-165275 crio[774]: time="2025-10-18T15:05:05.752131106Z" level=info msg="Ran pod sandbox 53300686154a7f377e03bf714bb7853e42de28791d54df8bd49d25da0f081ae3 with infra container: default/busybox/POD" id=d7210879-71bb-41f7-b9ac-ebfc19116210 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:05:05 no-preload-165275 crio[774]: time="2025-10-18T15:05:05.753538673Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0ca393f3-2b03-4e94-a78f-5cb805d32381 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:05:05 no-preload-165275 crio[774]: time="2025-10-18T15:05:05.753680383Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0ca393f3-2b03-4e94-a78f-5cb805d32381 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:05:05 no-preload-165275 crio[774]: time="2025-10-18T15:05:05.753716316Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0ca393f3-2b03-4e94-a78f-5cb805d32381 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:05:05 no-preload-165275 crio[774]: time="2025-10-18T15:05:05.754323074Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=37c5d79e-63fd-4f41-92f0-0dce2e344cd3 name=/runtime.v1.ImageService/PullImage
	Oct 18 15:05:05 no-preload-165275 crio[774]: time="2025-10-18T15:05:05.758085107Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 15:05:07 no-preload-165275 crio[774]: time="2025-10-18T15:05:07.744208724Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=37c5d79e-63fd-4f41-92f0-0dce2e344cd3 name=/runtime.v1.ImageService/PullImage
	Oct 18 15:05:07 no-preload-165275 crio[774]: time="2025-10-18T15:05:07.74485258Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b31b4905-8817-4d5d-80e6-e1b666ba2e80 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:05:07 no-preload-165275 crio[774]: time="2025-10-18T15:05:07.746259944Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0df7c9a7-fee2-427b-abc5-fbad366b80eb name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:05:07 no-preload-165275 crio[774]: time="2025-10-18T15:05:07.749822385Z" level=info msg="Creating container: default/busybox/busybox" id=a0da429a-9c74-445d-b230-6229ee44894d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:05:07 no-preload-165275 crio[774]: time="2025-10-18T15:05:07.750786499Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:05:07 no-preload-165275 crio[774]: time="2025-10-18T15:05:07.754941723Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:05:07 no-preload-165275 crio[774]: time="2025-10-18T15:05:07.755375902Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:05:07 no-preload-165275 crio[774]: time="2025-10-18T15:05:07.780505997Z" level=info msg="Created container 1170011f283077c004f402ff0bcaeafecd4990445349c5dfbe72e314b2c22ec1: default/busybox/busybox" id=a0da429a-9c74-445d-b230-6229ee44894d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:05:07 no-preload-165275 crio[774]: time="2025-10-18T15:05:07.781149014Z" level=info msg="Starting container: 1170011f283077c004f402ff0bcaeafecd4990445349c5dfbe72e314b2c22ec1" id=381b1788-5439-4112-ae57-bda44180766f name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:05:07 no-preload-165275 crio[774]: time="2025-10-18T15:05:07.782880198Z" level=info msg="Started container" PID=2986 containerID=1170011f283077c004f402ff0bcaeafecd4990445349c5dfbe72e314b2c22ec1 description=default/busybox/busybox id=381b1788-5439-4112-ae57-bda44180766f name=/runtime.v1.RuntimeService/StartContainer sandboxID=53300686154a7f377e03bf714bb7853e42de28791d54df8bd49d25da0f081ae3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1170011f28307       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   53300686154a7       busybox                                     default
	1812f6739a8a0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   b67c89c373fac       coredns-66bc5c9577-cmgb8                    kube-system
	841d63873f1e4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   f56e318ff6d89       storage-provisioner                         kube-system
	939c1bde4b5d4       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   1a6f48699cd8b       kindnet-8c5w4                               kube-system
	6cbeef3e9aec9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   8fbc78fd7308d       kube-proxy-84fhl                            kube-system
	c021cc24e0d47       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   049cbefa28fd4       kube-scheduler-no-preload-165275            kube-system
	38febe9507751       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   e9e9424f65b4a       kube-controller-manager-no-preload-165275   kube-system
	fac80b6fddf3a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   f88e0640e5c55       kube-apiserver-no-preload-165275            kube-system
	286c5e8c3943d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   1ada4618250e3       etcd-no-preload-165275                      kube-system
	
	
	==> coredns [1812f6739a8a0dfcb86f4083ed337aaf608ac9a433516daafa68b1f8261bbbe0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50463 - 40379 "HINFO IN 1631445646558351247.3850014521594295081. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.145884863s
	
	
	==> describe nodes <==
	Name:               no-preload-165275
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-165275
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=no-preload-165275
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_04_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:04:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-165275
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:05:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:05:14 +0000   Sat, 18 Oct 2025 15:04:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:05:14 +0000   Sat, 18 Oct 2025 15:04:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:05:14 +0000   Sat, 18 Oct 2025 15:04:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 15:05:14 +0000   Sat, 18 Oct 2025 15:05:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-165275
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                6d727dff-cef3-4b2d-bb6c-d6d48f30b9ab
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-cmgb8                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-165275                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-8c5w4                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-165275             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-165275    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-84fhl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-165275             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node no-preload-165275 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node no-preload-165275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node no-preload-165275 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node no-preload-165275 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node no-preload-165275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node no-preload-165275 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node no-preload-165275 event: Registered Node no-preload-165275 in Controller
	  Normal  NodeReady                12s                kubelet          Node no-preload-165275 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	
	
	==> etcd [286c5e8c3943d38354ada1ad3028c0b1746a983c7a299fe7c092f93b19869383] <==
	{"level":"warn","ts":"2025-10-18T15:04:39.979967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:39.987903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:39.998142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.005789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.012061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.019940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.027939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.035134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.042389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.049727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.058065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.065609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.072450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.078888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.087541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.094886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.102597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.109270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.116172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.122648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.131051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.137377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.151281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.167676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:04:40.209869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34910","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:05:14 up  2:47,  0 user,  load average: 3.70, 2.78, 1.85
	Linux no-preload-165275 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
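
The kernel section above maps onto standard host commands; a minimal equivalent, assuming a shell on the node (the report does not show the exact collection code):

	uptime                            # "15:05:14 up  2:47,  0 user,  load average: ..."
	uname -a                          # kernel release/version/architecture line (or a subset of its fields)
	grep PRETTY_NAME /etc/os-release  # PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"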
	
	
	==> kindnet [939c1bde4b5d419e75786681f41ab653b373076bc8b15002f09e712d0a84be8a] <==
	I1018 15:04:51.454432       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 15:04:51.454701       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 15:04:51.454841       1 main.go:148] setting mtu 1500 for CNI 
	I1018 15:04:51.454861       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 15:04:51.454884       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T15:04:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 15:04:51.750690       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 15:04:51.750714       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 15:04:51.750724       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 15:04:51.751410       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 15:04:52.050858       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 15:04:52.050891       1 metrics.go:72] Registering metrics
	I1018 15:04:52.051035       1 controller.go:711] "Syncing nftables rules"
	I1018 15:05:01.751160       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 15:05:01.751214       1 main.go:301] handling current node
	I1018 15:05:11.751560       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 15:05:11.751590       1 main.go:301] handling current node
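
The one error in the kindnet log, "nri plugin exited: failed to connect to NRI service", means the container runtime does not expose an NRI socket at the path the plugin dials; kindnet falls back to its informer path, and the caches sync a moment later. A quick check on the node, using the exact path from the error message:

	# If NRI is disabled in the runtime, this socket will not exist.
	ls -l /var/run/nri/nri.sock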
	
	
	==> kube-apiserver [fac80b6fddf3a1452074b1ab220fa9397f02228c039fa48f590604fda55c67fb] <==
	I1018 15:04:40.704473       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 15:04:40.704520       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1018 15:04:40.710223       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 15:04:40.716276       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 15:04:40.716430       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 15:04:40.724559       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:04:40.905857       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 15:04:41.606014       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 15:04:41.609555       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 15:04:41.609574       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:04:42.109394       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:04:42.145008       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:04:42.210607       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 15:04:42.218246       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1018 15:04:42.219502       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 15:04:42.224725       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 15:04:42.677640       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 15:04:43.032165       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 15:04:43.040395       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 15:04:43.047084       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 15:04:48.326826       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 15:04:48.679235       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:04:48.684826       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:04:48.777488       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1018 15:05:13.495822       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:47526: use of closed network connection
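
The single error at the end of the kube-apiserver log, "use of closed network connection", is the server-side trace of a client on the host (192.168.85.1) hanging up an established connection to 8443 mid-stream and is usually benign. One way such an entry can be produced, assuming kubectl access from the host (an illustration, not a confirmed cause here):

	# Start a watch and kill it abruptly; the apiserver may log a
	# closed-connection read error for the dropped client socket.
	timeout -s KILL 2 kubectl get pods -A --watch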
	
	
	==> kube-controller-manager [38febe95077519864f50395cdeac7693aaae90d5f6e043ec0cbb13ea1d488259] <==
	I1018 15:04:47.675511       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 15:04:47.675535       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 15:04:47.676575       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 15:04:47.678865       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 15:04:47.679631       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-165275" podCIDRs=["10.244.0.0/24"]
	I1018 15:04:47.683841       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 15:04:47.685056       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 15:04:47.692339       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 15:04:47.699636       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 15:04:47.723277       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 15:04:47.723336       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 15:04:47.723358       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 15:04:47.723367       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 15:04:47.723425       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 15:04:47.723466       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 15:04:47.723561       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-165275"
	I1018 15:04:47.723618       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 15:04:47.724361       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 15:04:47.725448       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 15:04:47.725648       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 15:04:47.729012       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 15:04:47.729943       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 15:04:47.730030       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1018 15:04:48.923210       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1018 15:05:02.725982       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
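
The one "Unhandled Error" in the controller-manager log is an optimistic-concurrency conflict: the replica-set sync wrote against a stale resourceVersion, the apiserver rejected it with "the object has been modified", and the controller re-reads and retries on the next sync. The version the conflict check compares is visible on the object itself, assuming kubectl access:

	kubectl -n kube-system get rs coredns-66bc5c9577 \
	  -o jsonpath='{.metadata.resourceVersion}{"\n"}'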
	
	
	==> kube-proxy [6cbeef3e9aec98c7c8c5d4dd728c1dc613ec304ca2bcf29ca2dad9c8ee1ec752] <==
	I1018 15:04:49.209696       1 server_linux.go:53] "Using iptables proxy"
	I1018 15:04:49.270720       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 15:04:49.371423       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 15:04:49.371459       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 15:04:49.371553       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 15:04:49.391639       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 15:04:49.391706       1 server_linux.go:132] "Using iptables Proxier"
	I1018 15:04:49.397512       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 15:04:49.397982       1 server.go:527] "Version info" version="v1.34.1"
	I1018 15:04:49.398023       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:04:49.399460       1 config.go:200] "Starting service config controller"
	I1018 15:04:49.399498       1 config.go:106] "Starting endpoint slice config controller"
	I1018 15:04:49.399510       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 15:04:49.399505       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 15:04:49.399523       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 15:04:49.399493       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 15:04:49.399555       1 config.go:309] "Starting node config controller"
	I1018 15:04:49.399564       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 15:04:49.399574       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 15:04:49.499681       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 15:04:49.499670       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 15:04:49.499711       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
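
The kube-proxy warning above is advisory: with nodePortAddresses unset, NodePort services accept connections on every local IP. The flag form `--nodeport-addresses primary` is quoted directly from the warning; in a kubeadm-style cluster the same setting lives in the kube-proxy ConfigMap (an assumption about this cluster's component-config layout):

	kubectl -n kube-system edit configmap kube-proxy
	# under the embedded KubeProxyConfiguration, set:
	#   nodePortAddresses: ["primary"]
	# then restart the kube-proxy pods to pick up the change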
	
	
	==> kube-scheduler [c021cc24e0d4749f92b3c76ee392edcee7e2722b05fea5e252c69ae9bd29776e] <==
	E1018 15:04:40.652838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 15:04:40.652933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 15:04:40.652901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 15:04:40.652987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 15:04:40.653004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 15:04:40.653087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 15:04:40.653091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 15:04:40.653105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 15:04:40.653109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 15:04:40.653238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 15:04:40.653247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 15:04:40.653335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 15:04:41.458613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 15:04:41.595553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 15:04:41.645164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 15:04:41.674491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 15:04:41.741634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 15:04:41.766202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 15:04:41.785460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 15:04:41.845813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 15:04:41.882094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 15:04:41.909213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 15:04:41.935413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 15:04:41.943616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1018 15:04:44.850073       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
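
All of the scheduler's "Failed to watch ... is forbidden" errors are confined to the first seconds after startup (15:04:40-15:04:41), before RBAC bootstrapping finished; the final line shows an informer cache syncing at 15:04:44 once permissions were in place. Whether the permissions converged can be checked with impersonation, assuming admin access to the cluster:

	# Should print "yes" once the system:kube-scheduler bindings exist.
	kubectl auth can-i list nodes --as=system:kube-scheduler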
	
	
	==> kubelet <==
	Oct 18 15:04:43 no-preload-165275 kubelet[2297]: I1018 15:04:43.957897    2297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-165275" podStartSLOduration=0.957877834 podStartE2EDuration="957.877834ms" podCreationTimestamp="2025-10-18 15:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:04:43.957844374 +0000 UTC m=+1.146955611" watchObservedRunningTime="2025-10-18 15:04:43.957877834 +0000 UTC m=+1.146989069"
	Oct 18 15:04:43 no-preload-165275 kubelet[2297]: I1018 15:04:43.982643    2297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-165275" podStartSLOduration=0.982619624 podStartE2EDuration="982.619624ms" podCreationTimestamp="2025-10-18 15:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:04:43.967455242 +0000 UTC m=+1.156566475" watchObservedRunningTime="2025-10-18 15:04:43.982619624 +0000 UTC m=+1.171730859"
	Oct 18 15:04:43 no-preload-165275 kubelet[2297]: I1018 15:04:43.993547    2297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-165275" podStartSLOduration=0.993525629 podStartE2EDuration="993.525629ms" podCreationTimestamp="2025-10-18 15:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:04:43.982966116 +0000 UTC m=+1.172077350" watchObservedRunningTime="2025-10-18 15:04:43.993525629 +0000 UTC m=+1.182636862"
	Oct 18 15:04:44 no-preload-165275 kubelet[2297]: I1018 15:04:44.004160    2297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-165275" podStartSLOduration=1.004138779 podStartE2EDuration="1.004138779s" podCreationTimestamp="2025-10-18 15:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:04:43.99375088 +0000 UTC m=+1.182862106" watchObservedRunningTime="2025-10-18 15:04:44.004138779 +0000 UTC m=+1.193250007"
	Oct 18 15:04:47 no-preload-165275 kubelet[2297]: I1018 15:04:47.745430    2297 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 15:04:47 no-preload-165275 kubelet[2297]: I1018 15:04:47.746608    2297 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 15:04:48 no-preload-165275 kubelet[2297]: I1018 15:04:48.837310    2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a12831b-4de0-40a5-8d0d-c14ce5eb116f-lib-modules\") pod \"kindnet-8c5w4\" (UID: \"4a12831b-4de0-40a5-8d0d-c14ce5eb116f\") " pod="kube-system/kindnet-8c5w4"
	Oct 18 15:04:48 no-preload-165275 kubelet[2297]: I1018 15:04:48.837369    2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdfw8\" (UniqueName: \"kubernetes.io/projected/4a12831b-4de0-40a5-8d0d-c14ce5eb116f-kube-api-access-fdfw8\") pod \"kindnet-8c5w4\" (UID: \"4a12831b-4de0-40a5-8d0d-c14ce5eb116f\") " pod="kube-system/kindnet-8c5w4"
	Oct 18 15:04:48 no-preload-165275 kubelet[2297]: I1018 15:04:48.837409    2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xv5k\" (UniqueName: \"kubernetes.io/projected/0a001757-fcdc-48f4-96b6-55e6b0a44e15-kube-api-access-9xv5k\") pod \"kube-proxy-84fhl\" (UID: \"0a001757-fcdc-48f4-96b6-55e6b0a44e15\") " pod="kube-system/kube-proxy-84fhl"
	Oct 18 15:04:48 no-preload-165275 kubelet[2297]: I1018 15:04:48.837434    2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4a12831b-4de0-40a5-8d0d-c14ce5eb116f-cni-cfg\") pod \"kindnet-8c5w4\" (UID: \"4a12831b-4de0-40a5-8d0d-c14ce5eb116f\") " pod="kube-system/kindnet-8c5w4"
	Oct 18 15:04:48 no-preload-165275 kubelet[2297]: I1018 15:04:48.837465    2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0a001757-fcdc-48f4-96b6-55e6b0a44e15-kube-proxy\") pod \"kube-proxy-84fhl\" (UID: \"0a001757-fcdc-48f4-96b6-55e6b0a44e15\") " pod="kube-system/kube-proxy-84fhl"
	Oct 18 15:04:48 no-preload-165275 kubelet[2297]: I1018 15:04:48.837491    2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a001757-fcdc-48f4-96b6-55e6b0a44e15-xtables-lock\") pod \"kube-proxy-84fhl\" (UID: \"0a001757-fcdc-48f4-96b6-55e6b0a44e15\") " pod="kube-system/kube-proxy-84fhl"
	Oct 18 15:04:48 no-preload-165275 kubelet[2297]: I1018 15:04:48.837510    2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a001757-fcdc-48f4-96b6-55e6b0a44e15-lib-modules\") pod \"kube-proxy-84fhl\" (UID: \"0a001757-fcdc-48f4-96b6-55e6b0a44e15\") " pod="kube-system/kube-proxy-84fhl"
	Oct 18 15:04:48 no-preload-165275 kubelet[2297]: I1018 15:04:48.837533    2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a12831b-4de0-40a5-8d0d-c14ce5eb116f-xtables-lock\") pod \"kindnet-8c5w4\" (UID: \"4a12831b-4de0-40a5-8d0d-c14ce5eb116f\") " pod="kube-system/kindnet-8c5w4"
	Oct 18 15:04:49 no-preload-165275 kubelet[2297]: I1018 15:04:49.947704    2297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-84fhl" podStartSLOduration=1.947683571 podStartE2EDuration="1.947683571s" podCreationTimestamp="2025-10-18 15:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:04:49.94756951 +0000 UTC m=+7.136680742" watchObservedRunningTime="2025-10-18 15:04:49.947683571 +0000 UTC m=+7.136794805"
	Oct 18 15:04:52 no-preload-165275 kubelet[2297]: I1018 15:04:52.211250    2297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-8c5w4" podStartSLOduration=2.090662579 podStartE2EDuration="4.211228457s" podCreationTimestamp="2025-10-18 15:04:48 +0000 UTC" firstStartedPulling="2025-10-18 15:04:49.120326252 +0000 UTC m=+6.309437465" lastFinishedPulling="2025-10-18 15:04:51.240892116 +0000 UTC m=+8.430003343" observedRunningTime="2025-10-18 15:04:51.955654773 +0000 UTC m=+9.144766010" watchObservedRunningTime="2025-10-18 15:04:52.211228457 +0000 UTC m=+9.400339691"
	Oct 18 15:05:02 no-preload-165275 kubelet[2297]: I1018 15:05:02.158224    2297 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 15:05:02 no-preload-165275 kubelet[2297]: I1018 15:05:02.233836    2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dqdr\" (UniqueName: \"kubernetes.io/projected/c052552a-00af-4394-b24f-0c6fb821c17c-kube-api-access-7dqdr\") pod \"storage-provisioner\" (UID: \"c052552a-00af-4394-b24f-0c6fb821c17c\") " pod="kube-system/storage-provisioner"
	Oct 18 15:05:02 no-preload-165275 kubelet[2297]: I1018 15:05:02.233893    2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd196175-055d-422b-9d50-4c2d27396003-config-volume\") pod \"coredns-66bc5c9577-cmgb8\" (UID: \"dd196175-055d-422b-9d50-4c2d27396003\") " pod="kube-system/coredns-66bc5c9577-cmgb8"
	Oct 18 15:05:02 no-preload-165275 kubelet[2297]: I1018 15:05:02.233946    2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7d9x\" (UniqueName: \"kubernetes.io/projected/dd196175-055d-422b-9d50-4c2d27396003-kube-api-access-x7d9x\") pod \"coredns-66bc5c9577-cmgb8\" (UID: \"dd196175-055d-422b-9d50-4c2d27396003\") " pod="kube-system/coredns-66bc5c9577-cmgb8"
	Oct 18 15:05:02 no-preload-165275 kubelet[2297]: I1018 15:05:02.233985    2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c052552a-00af-4394-b24f-0c6fb821c17c-tmp\") pod \"storage-provisioner\" (UID: \"c052552a-00af-4394-b24f-0c6fb821c17c\") " pod="kube-system/storage-provisioner"
	Oct 18 15:05:02 no-preload-165275 kubelet[2297]: I1018 15:05:02.985314    2297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cmgb8" podStartSLOduration=14.985292655 podStartE2EDuration="14.985292655s" podCreationTimestamp="2025-10-18 15:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:05:02.984948283 +0000 UTC m=+20.174059517" watchObservedRunningTime="2025-10-18 15:05:02.985292655 +0000 UTC m=+20.174403889"
	Oct 18 15:05:02 no-preload-165275 kubelet[2297]: I1018 15:05:02.996718    2297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.996698751 podStartE2EDuration="13.996698751s" podCreationTimestamp="2025-10-18 15:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:05:02.996555601 +0000 UTC m=+20.185666835" watchObservedRunningTime="2025-10-18 15:05:02.996698751 +0000 UTC m=+20.185809985"
	Oct 18 15:05:05 no-preload-165275 kubelet[2297]: I1018 15:05:05.554540    2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2btf\" (UniqueName: \"kubernetes.io/projected/71470317-9d5b-4040-a765-b12127d06e8f-kube-api-access-g2btf\") pod \"busybox\" (UID: \"71470317-9d5b-4040-a765-b12127d06e8f\") " pod="default/busybox"
	Oct 18 15:05:07 no-preload-165275 kubelet[2297]: I1018 15:05:07.995963    2297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.004153163 podStartE2EDuration="2.995943218s" podCreationTimestamp="2025-10-18 15:05:05 +0000 UTC" firstStartedPulling="2025-10-18 15:05:05.753934374 +0000 UTC m=+22.943045591" lastFinishedPulling="2025-10-18 15:05:07.745724419 +0000 UTC m=+24.934835646" observedRunningTime="2025-10-18 15:05:07.995596487 +0000 UTC m=+25.184707721" watchObservedRunningTime="2025-10-18 15:05:07.995943218 +0000 UTC m=+25.185054454"
	
	
	==> storage-provisioner [841d63873f1e4914d0104559eef7abca5aaa1557f9b883c8e170c314393a7c9b] <==
	I1018 15:05:02.542346       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 15:05:02.550217       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 15:05:02.550276       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 15:05:02.553890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:05:02.559807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 15:05:02.560013       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 15:05:02.560107       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"297bdaca-635d-490e-89a8-cdf06fe2f03a", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-165275_a9eb6d21-1d30-43c4-9161-a8178ad4c688 became leader
	I1018 15:05:02.560152       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-165275_a9eb6d21-1d30-43c4-9161-a8178ad4c688!
	W1018 15:05:02.562373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:05:02.565803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 15:05:02.660779       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-165275_a9eb6d21-1d30-43c4-9161-a8178ad4c688!
	W1018 15:05:04.569752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:05:04.577074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:05:06.580645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:05:06.584557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:05:08.587649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:05:08.592587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:05:10.595726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:05:10.603331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:05:12.606712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:05:12.611782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:05:14.617626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:05:14.622027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
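
The provisioner's warnings repeat roughly every two seconds because its leader election renews a lock stored in a v1 Endpoints object, an API deprecated in favor of discovery.k8s.io EndpointSlices (and, for leader election, Lease objects); each renewal round-trips the deprecated resource. The lock object named in the log can be inspected directly, assuming kubectl access:

	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml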
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-165275 -n no-preload-165275
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-165275 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-948537 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-948537 --alsologtostderr -v=1: exit status 80 (1.821925187s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-948537 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 15:05:39.086678  343022 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:05:39.086943  343022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:05:39.086955  343022 out.go:374] Setting ErrFile to fd 2...
	I1018 15:05:39.086960  343022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:05:39.087143  343022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:05:39.087380  343022 out.go:368] Setting JSON to false
	I1018 15:05:39.087427  343022 mustload.go:65] Loading cluster: old-k8s-version-948537
	I1018 15:05:39.087749  343022 config.go:182] Loaded profile config "old-k8s-version-948537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 15:05:39.088203  343022 cli_runner.go:164] Run: docker container inspect old-k8s-version-948537 --format={{.State.Status}}
	I1018 15:05:39.107957  343022 host.go:66] Checking if "old-k8s-version-948537" exists ...
	I1018 15:05:39.108280  343022 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:05:39.173394  343022 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:88 SystemTime:2025-10-18 15:05:39.161437557 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:05:39.174031  343022 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-948537 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 15:05:39.176079  343022 out.go:179] * Pausing node old-k8s-version-948537 ... 
	I1018 15:05:39.177367  343022 host.go:66] Checking if "old-k8s-version-948537" exists ...
	I1018 15:05:39.177705  343022 ssh_runner.go:195] Run: systemctl --version
	I1018 15:05:39.177766  343022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-948537
	I1018 15:05:39.197664  343022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/old-k8s-version-948537/id_rsa Username:docker}
	I1018 15:05:39.300543  343022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:05:39.322441  343022 pause.go:52] kubelet running: true
	I1018 15:05:39.322525  343022 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:05:39.511269  343022 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:05:39.511400  343022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:05:39.597136  343022 cri.go:89] found id: "067057e99a71c35cef6be48c228170e8a97bf712bc8e81bb891f09faeeff93cf"
	I1018 15:05:39.597161  343022 cri.go:89] found id: "6b4b5c46eb7c020c11c44ffc6289452f21552a034d98560f814fd10cd937517d"
	I1018 15:05:39.597165  343022 cri.go:89] found id: "67ecafd74cf06e59fa294c1705e72d6c1eee8307b1739175eda1df37d8321210"
	I1018 15:05:39.597168  343022 cri.go:89] found id: "f6b23d7900af3b31399d5fe6ff8b1e0a4f89b0cb9d8e045f2c6bf85fc2a3c4da"
	I1018 15:05:39.597170  343022 cri.go:89] found id: "52b03114a7d11a70da29b03a2cdcf4e45d69beb3474365226e6d235c2df948ef"
	I1018 15:05:39.597176  343022 cri.go:89] found id: "66072254c9bf69ad4fa0d45670ab4ee9fbc8ac23b9081209ca73e1a08513bb77"
	I1018 15:05:39.597179  343022 cri.go:89] found id: "44dad120630eb2d0733b71694fa13433f00c53f74453d3fb34d10d2c5e2c1174"
	I1018 15:05:39.597181  343022 cri.go:89] found id: "c6c9f1798915d53f9ebc8eea360ea84ac0d228a2a817fa4a501701022703284a"
	I1018 15:05:39.597184  343022 cri.go:89] found id: "851f6b38dcd85d53e129d77afb0ca322c1c82f4dcc331a5606dc1cbaa443e3f6"
	I1018 15:05:39.597196  343022 cri.go:89] found id: "2d707f4e636f3c33af0939dcc558810386c9490c2aadb8081a1acd6065892e82"
	I1018 15:05:39.597199  343022 cri.go:89] found id: "ca59ac639c4af3d27021b467cc03eca4d72a3f9c7d8418fc024c78d9006549fe"
	I1018 15:05:39.597201  343022 cri.go:89] found id: ""
	I1018 15:05:39.597239  343022 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:05:39.609285  343022 retry.go:31] will retry after 309.188174ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:05:39Z" level=error msg="open /run/runc: no such file or directory"
	I1018 15:05:39.918742  343022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:05:39.932656  343022 pause.go:52] kubelet running: false
	I1018 15:05:39.932721  343022 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:05:40.127743  343022 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:05:40.127870  343022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:05:40.225863  343022 cri.go:89] found id: "067057e99a71c35cef6be48c228170e8a97bf712bc8e81bb891f09faeeff93cf"
	I1018 15:05:40.225886  343022 cri.go:89] found id: "6b4b5c46eb7c020c11c44ffc6289452f21552a034d98560f814fd10cd937517d"
	I1018 15:05:40.225893  343022 cri.go:89] found id: "67ecafd74cf06e59fa294c1705e72d6c1eee8307b1739175eda1df37d8321210"
	I1018 15:05:40.225898  343022 cri.go:89] found id: "f6b23d7900af3b31399d5fe6ff8b1e0a4f89b0cb9d8e045f2c6bf85fc2a3c4da"
	I1018 15:05:40.225909  343022 cri.go:89] found id: "52b03114a7d11a70da29b03a2cdcf4e45d69beb3474365226e6d235c2df948ef"
	I1018 15:05:40.225925  343022 cri.go:89] found id: "66072254c9bf69ad4fa0d45670ab4ee9fbc8ac23b9081209ca73e1a08513bb77"
	I1018 15:05:40.225929  343022 cri.go:89] found id: "44dad120630eb2d0733b71694fa13433f00c53f74453d3fb34d10d2c5e2c1174"
	I1018 15:05:40.225933  343022 cri.go:89] found id: "c6c9f1798915d53f9ebc8eea360ea84ac0d228a2a817fa4a501701022703284a"
	I1018 15:05:40.225938  343022 cri.go:89] found id: "851f6b38dcd85d53e129d77afb0ca322c1c82f4dcc331a5606dc1cbaa443e3f6"
	I1018 15:05:40.225946  343022 cri.go:89] found id: "2d707f4e636f3c33af0939dcc558810386c9490c2aadb8081a1acd6065892e82"
	I1018 15:05:40.225950  343022 cri.go:89] found id: "ca59ac639c4af3d27021b467cc03eca4d72a3f9c7d8418fc024c78d9006549fe"
	I1018 15:05:40.225954  343022 cri.go:89] found id: ""
	I1018 15:05:40.225994  343022 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:05:40.240876  343022 retry.go:31] will retry after 223.660135ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:05:40Z" level=error msg="open /run/runc: no such file or directory"
	I1018 15:05:40.465382  343022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:05:40.483327  343022 pause.go:52] kubelet running: false
	I1018 15:05:40.483548  343022 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:05:40.721618  343022 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:05:40.721726  343022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:05:40.821060  343022 cri.go:89] found id: "067057e99a71c35cef6be48c228170e8a97bf712bc8e81bb891f09faeeff93cf"
	I1018 15:05:40.821089  343022 cri.go:89] found id: "6b4b5c46eb7c020c11c44ffc6289452f21552a034d98560f814fd10cd937517d"
	I1018 15:05:40.821095  343022 cri.go:89] found id: "67ecafd74cf06e59fa294c1705e72d6c1eee8307b1739175eda1df37d8321210"
	I1018 15:05:40.821101  343022 cri.go:89] found id: "f6b23d7900af3b31399d5fe6ff8b1e0a4f89b0cb9d8e045f2c6bf85fc2a3c4da"
	I1018 15:05:40.821104  343022 cri.go:89] found id: "52b03114a7d11a70da29b03a2cdcf4e45d69beb3474365226e6d235c2df948ef"
	I1018 15:05:40.821109  343022 cri.go:89] found id: "66072254c9bf69ad4fa0d45670ab4ee9fbc8ac23b9081209ca73e1a08513bb77"
	I1018 15:05:40.821113  343022 cri.go:89] found id: "44dad120630eb2d0733b71694fa13433f00c53f74453d3fb34d10d2c5e2c1174"
	I1018 15:05:40.821117  343022 cri.go:89] found id: "c6c9f1798915d53f9ebc8eea360ea84ac0d228a2a817fa4a501701022703284a"
	I1018 15:05:40.821121  343022 cri.go:89] found id: "851f6b38dcd85d53e129d77afb0ca322c1c82f4dcc331a5606dc1cbaa443e3f6"
	I1018 15:05:40.821130  343022 cri.go:89] found id: "2d707f4e636f3c33af0939dcc558810386c9490c2aadb8081a1acd6065892e82"
	I1018 15:05:40.821134  343022 cri.go:89] found id: "ca59ac639c4af3d27021b467cc03eca4d72a3f9c7d8418fc024c78d9006549fe"
	I1018 15:05:40.821138  343022 cri.go:89] found id: ""
	I1018 15:05:40.821187  343022 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:05:40.839257  343022 out.go:203] 
	W1018 15:05:40.840488  343022 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:05:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:05:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 15:05:40.840508  343022 out.go:285] * 
	* 
	W1018 15:05:40.847999  343022 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 15:05:40.849552  343022 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-948537 --alsologtostderr -v=1 failed: exit status 80
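
The failure mode is visible in the stderr above: the pause disables the kubelet cleanly, but every attempt to enumerate running containers with `sudo runc list -f json` fails with "open /run/runc: no such file or directory", so after its retries minikube exits with GUEST_PAUSE. A hedged first diagnosis on the node (crictl already appears in the log; the runc state-directory path is taken from the error text, not from CRI-O's documented defaults):

	sudo ls -ld /run/runc   # the state directory runc expected but did not find
	sudo crictl ps -a       # what the CRI runtime itself reports as running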
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-948537
helpers_test.go:243: (dbg) docker inspect old-k8s-version-948537:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7",
	        "Created": "2025-10-18T15:03:24.489578766Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 331956,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T15:04:37.494464413Z",
	            "FinishedAt": "2025-10-18T15:04:35.693961963Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7/hostname",
	        "HostsPath": "/var/lib/docker/containers/3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7/hosts",
	        "LogPath": "/var/lib/docker/containers/3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7/3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7-json.log",
	        "Name": "/old-k8s-version-948537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-948537:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-948537",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7",
	                "LowerDir": "/var/lib/docker/overlay2/4e9f5ec3d6a3c3a3e9655b5f49c6cec160d792c920c0ef7bc94b940d3800dbd6-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e9f5ec3d6a3c3a3e9655b5f49c6cec160d792c920c0ef7bc94b940d3800dbd6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e9f5ec3d6a3c3a3e9655b5f49c6cec160d792c920c0ef7bc94b940d3800dbd6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e9f5ec3d6a3c3a3e9655b5f49c6cec160d792c920c0ef7bc94b940d3800dbd6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-948537",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-948537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-948537",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-948537",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-948537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d59c22038a881d06227490cfb017258ab78e228b1ed96a50540d6ef6c22f3050",
	            "SandboxKey": "/var/run/docker/netns/d59c22038a88",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-948537": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:83:f7:70:c5:75",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "61ee9ee46471b491cbfab6422a4dbe2929bd7ab545265cf14dbd822e55ffe7f8",
	                    "EndpointID": "607967f700c84c7d6e0efa47e8698b7a12119bde6d74b6bae18612a1c9344ce8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-948537",
	                        "3730ae01e013"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
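The full docker inspect dump above is kept for the record; when only a few fields matter (for example the container state and published ports), a Go format template narrows the output. A hedged sketch using the container name from the log, assuming the docker CLI is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Render only the container state and port map instead of the
		// full JSON document shown above. Container name is from the log.
		out, err := exec.Command("docker", "inspect",
			"-f", "{{.State.Status}} {{.NetworkSettings.Ports}}",
			"old-k8s-version-948537").CombinedOutput()
		if err != nil {
			fmt.Printf("inspect failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("state and ports: %s", out)
	}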
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-948537 -n old-k8s-version-948537
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-948537 -n old-k8s-version-948537: exit status 2 (409.991428ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
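As the note says, a non-zero exit from minikube status is not necessarily fatal here: the host container reports Running even though Kubernetes inside it is paused, and status reflects cluster state in its exit code. An illustrative sketch of tolerating that, using the command from the log above:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-948537",
			"-n", "old-k8s-version-948537")
		out, err := cmd.Output()
		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode()
		}
		// Per the harness note, a non-zero status exit "may be ok": the
		// host is up even though Kubernetes inside it is paused.
		fmt.Printf("host=%s exit=%d\n", out, code)
	}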
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-948537 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-948537 logs -n 25: (1.488539121s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p NoKubernetes-286873 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-286873       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ delete  │ -p NoKubernetes-286873                                                                                                                                                                                                                        │ NoKubernetes-286873       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ start   │ -p cert-options-648086 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-648086       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:03 UTC │
	│ start   │ -p missing-upgrade-635158 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-635158    │ jenkins │ v1.32.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:03 UTC │
	│ ssh     │ cert-options-648086 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-648086       │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:03 UTC │
	│ ssh     │ -p cert-options-648086 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-648086       │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:03 UTC │
	│ delete  │ -p cert-options-648086                                                                                                                                                                                                                        │ cert-options-648086       │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:03 UTC │
	│ start   │ -p old-k8s-version-948537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:04 UTC │
	│ start   │ -p missing-upgrade-635158 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-635158    │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:04 UTC │
	│ delete  │ -p missing-upgrade-635158                                                                                                                                                                                                                     │ missing-upgrade-635158    │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ start   │ -p no-preload-165275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165275         │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-948537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │                     │
	│ stop    │ -p old-k8s-version-948537 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-948537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ start   │ -p old-k8s-version-948537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-165275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-165275         │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ stop    │ -p no-preload-165275 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-165275         │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-833162 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-833162 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ delete  │ -p kubernetes-upgrade-833162                                                                                                                                                                                                                  │ kubernetes-upgrade-833162 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ addons  │ enable dashboard -p no-preload-165275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-165275         │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p embed-certs-775590 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-775590        │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ start   │ -p no-preload-165275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165275         │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ image   │ old-k8s-version-948537 image list --format=json                                                                                                                                                                                               │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ pause   │ -p old-k8s-version-948537 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:05:32
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:05:32.322781  340627 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:05:32.322923  340627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:05:32.322932  340627 out.go:374] Setting ErrFile to fd 2...
	I1018 15:05:32.322939  340627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:05:32.323149  340627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:05:32.323593  340627 out.go:368] Setting JSON to false
	I1018 15:05:32.325760  340627 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10083,"bootTime":1760789849,"procs":421,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:05:32.325892  340627 start.go:141] virtualization: kvm guest
	I1018 15:05:32.328173  340627 out.go:179] * [no-preload-165275] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:05:32.330445  340627 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:05:32.330449  340627 notify.go:220] Checking for updates...
	I1018 15:05:32.332989  340627 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:05:32.334297  340627 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:05:32.335637  340627 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 15:05:32.336885  340627 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:05:32.338196  340627 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:05:32.303687  340611 config.go:182] Loaded profile config "cert-expiration-327346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:05:32.303864  340611 config.go:182] Loaded profile config "no-preload-165275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:05:32.303998  340611 config.go:182] Loaded profile config "old-k8s-version-948537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 15:05:32.304137  340611 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:05:32.330760  340611 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:05:32.330956  340611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:05:32.404568  340611 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-18 15:05:32.390432859 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:05:32.404679  340611 docker.go:318] overlay module found
	I1018 15:05:32.406551  340611 out.go:179] * Using the docker driver based on user configuration
	I1018 15:05:32.340051  340627 config.go:182] Loaded profile config "no-preload-165275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:05:32.340766  340627 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:05:32.372446  340627 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:05:32.372613  340627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:05:32.446400  340627 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-18 15:05:32.430766643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:05:32.446560  340627 docker.go:318] overlay module found
	I1018 15:05:32.448513  340627 out.go:179] * Using the docker driver based on existing profile
	I1018 15:05:32.407972  340611 start.go:305] selected driver: docker
	I1018 15:05:32.407994  340611 start.go:925] validating driver "docker" against <nil>
	I1018 15:05:32.408010  340611 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:05:32.408810  340611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:05:32.476523  340611 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-18 15:05:32.464528039 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:05:32.476767  340611 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 15:05:32.477197  340611 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:05:32.479387  340611 out.go:179] * Using Docker driver with root privileges
	I1018 15:05:32.480779  340611 cni.go:84] Creating CNI manager for ""
	I1018 15:05:32.480844  340611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:05:32.480854  340611 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 15:05:32.481006  340611 start.go:349] cluster config:
	{Name:embed-certs-775590 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-775590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:05:32.482645  340611 out.go:179] * Starting "embed-certs-775590" primary control-plane node in "embed-certs-775590" cluster
	I1018 15:05:32.484017  340611 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:05:32.485478  340611 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:05:32.489049  340611 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:05:32.489109  340611 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 15:05:32.489138  340611 cache.go:58] Caching tarball of preloaded images
	I1018 15:05:32.489158  340611 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 15:05:32.489267  340611 preload.go:233] Found /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 15:05:32.489282  340611 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 15:05:32.489416  340611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/config.json ...
	I1018 15:05:32.489442  340611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/config.json: {Name:mk27b8d43a78442b684da2a96570796e5d767c0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:05:32.512651  340611 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:05:32.512676  340611 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 15:05:32.512696  340611 cache.go:232] Successfully downloaded all kic artifacts
	I1018 15:05:32.512731  340611 start.go:360] acquireMachinesLock for embed-certs-775590: {Name:mk7c2e78c8f1aa9ee940b8ae2274718f1467b317 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.512871  340611 start.go:364] duration metric: took 119.281µs to acquireMachinesLock for "embed-certs-775590"
	I1018 15:05:32.512902  340611 start.go:93] Provisioning new machine with config: &{Name:embed-certs-775590 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-775590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:05:32.513016  340611 start.go:125] createHost starting for "" (driver="docker")
	I1018 15:05:32.449756  340627 start.go:305] selected driver: docker
	I1018 15:05:32.449791  340627 start.go:925] validating driver "docker" against &{Name:no-preload-165275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-165275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:05:32.449907  340627 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:05:32.450728  340627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:05:32.516771  340627 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-18 15:05:32.50652424 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:05:32.517130  340627 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:05:32.517162  340627 cni.go:84] Creating CNI manager for ""
	I1018 15:05:32.517230  340627 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:05:32.517297  340627 start.go:349] cluster config:
	{Name:no-preload-165275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-165275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:05:32.519011  340627 out.go:179] * Starting "no-preload-165275" primary control-plane node in "no-preload-165275" cluster
	I1018 15:05:32.520160  340627 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:05:32.521409  340627 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:05:32.522517  340627 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:05:32.522612  340627 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 15:05:32.522673  340627 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/config.json ...
	I1018 15:05:32.522840  340627 cache.go:107] acquiring lock: {Name:mkbaa1a4bd6915358a4926d0351a0e021f54346d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.522958  340627 cache.go:107] acquiring lock: {Name:mk314bda0d4e90238c0ed6d4b64ac6d98bf9f0e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.522979  340627 cache.go:107] acquiring lock: {Name:mkd6be508b79cf0b608e0017623eb5fbcb6b5bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.523032  340627 cache.go:115] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 15:05:32.522985  340627 cache.go:107] acquiring lock: {Name:mk1d022df204329fecb8dfdd48f2e6a2af0f3a7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.523046  340627 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 92.816µs
	I1018 15:05:32.523012  340627 cache.go:107] acquiring lock: {Name:mkecab1d576a5cee47304bc15dc72f9970f45c8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.523063  340627 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 15:05:32.523033  340627 cache.go:115] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 15:05:32.522992  340627 cache.go:107] acquiring lock: {Name:mk12de1c820b10b304bb440284c1b6916a987889 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.523079  340627 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 273.398µs
	I1018 15:05:32.523089  340627 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 15:05:32.522840  340627 cache.go:107] acquiring lock: {Name:mk72463510bc510f518ea67b24aec16a4002f6be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.522842  340627 cache.go:107] acquiring lock: {Name:mkcd0e2847def5d7525f56b72d40ef8eb4661666 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.523208  340627 cache.go:115] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 15:05:32.523217  340627 cache.go:115] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 15:05:32.523227  340627 cache.go:115] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1018 15:05:32.523232  340627 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 299.62µs
	I1018 15:05:32.523240  340627 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 330.8µs
	I1018 15:05:32.523238  340627 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 260.583µs
	I1018 15:05:32.523248  340627 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 15:05:32.523250  340627 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 15:05:32.523254  340627 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 15:05:32.523215  340627 cache.go:115] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 15:05:32.523250  340627 cache.go:115] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 15:05:32.523290  340627 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 327.038µs
	I1018 15:05:32.523247  340627 cache.go:115] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 15:05:32.523302  340627 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 15:05:32.523268  340627 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 486.461µs
	I1018 15:05:32.523317  340627 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 15:05:32.523311  340627 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 499.837µs
	I1018 15:05:32.523340  340627 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 15:05:32.523350  340627 cache.go:87] Successfully saved all images to host disk.
	I1018 15:05:32.544879  340627 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:05:32.544900  340627 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 15:05:32.544927  340627 cache.go:232] Successfully downloaded all kic artifacts
	I1018 15:05:32.544961  340627 start.go:360] acquireMachinesLock for no-preload-165275: {Name:mk24a38ac6e4e8fc6cc6d51b67ac49da84578c77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.545019  340627 start.go:364] duration metric: took 38.591µs to acquireMachinesLock for "no-preload-165275"
	I1018 15:05:32.545046  340627 start.go:96] Skipping create...Using existing machine configuration
	I1018 15:05:32.545053  340627 fix.go:54] fixHost starting: 
	I1018 15:05:32.545299  340627 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:05:32.565386  340627 fix.go:112] recreateIfNeeded on no-preload-165275: state=Stopped err=<nil>
	W1018 15:05:32.565420  340627 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 15:05:32.515935  340611 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 15:05:32.516164  340611 start.go:159] libmachine.API.Create for "embed-certs-775590" (driver="docker")
	I1018 15:05:32.516195  340611 client.go:168] LocalClient.Create starting
	I1018 15:05:32.516257  340611 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem
	I1018 15:05:32.516290  340611 main.go:141] libmachine: Decoding PEM data...
	I1018 15:05:32.516306  340611 main.go:141] libmachine: Parsing certificate...
	I1018 15:05:32.516383  340611 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem
	I1018 15:05:32.516414  340611 main.go:141] libmachine: Decoding PEM data...
	I1018 15:05:32.516432  340611 main.go:141] libmachine: Parsing certificate...
	I1018 15:05:32.516820  340611 cli_runner.go:164] Run: docker network inspect embed-certs-775590 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 15:05:32.534835  340611 cli_runner.go:211] docker network inspect embed-certs-775590 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 15:05:32.534970  340611 network_create.go:284] running [docker network inspect embed-certs-775590] to gather additional debugging logs...
	I1018 15:05:32.535001  340611 cli_runner.go:164] Run: docker network inspect embed-certs-775590
	W1018 15:05:32.553238  340611 cli_runner.go:211] docker network inspect embed-certs-775590 returned with exit code 1
	I1018 15:05:32.553270  340611 network_create.go:287] error running [docker network inspect embed-certs-775590]: docker network inspect embed-certs-775590: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-775590 not found
	I1018 15:05:32.553288  340611 network_create.go:289] output of [docker network inspect embed-certs-775590]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-775590 not found
	
	** /stderr **
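
The failed inspect above is expected: the network is probed with "docker network inspect", and exit status 1 together with a "not found" stderr is taken as permission to create it. A minimal Go sketch of that probe, under the assumption that matching on the stderr text is sufficient; the helper name is ours, not minikube's.

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // networkExists probes a docker network the way the log does: run
    // `docker network inspect`, and treat a "not found" failure as absent.
    func networkExists(name string) (bool, error) {
    	cmd := exec.Command("docker", "network", "inspect", name)
    	var stderr bytes.Buffer
    	cmd.Stderr = &stderr
    	if err := cmd.Run(); err != nil {
    		if strings.Contains(stderr.String(), "not found") {
    			return false, nil // exit 1 + "network ... not found": safe to create
    		}
    		return false, fmt.Errorf("docker network inspect %s: %v: %s", name, err, stderr.String())
    	}
    	return true, nil
    }

    func main() {
    	ok, err := networkExists("embed-certs-775590")
    	fmt.Println(ok, err)
    }
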
	I1018 15:05:32.553417  340611 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:05:32.574465  340611 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-67ded9675d49 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:eb:89:76:0f:a6} reservation:<nil>}
	I1018 15:05:32.575099  340611 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4b365c92bc46 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:db:b6:83:36:75} reservation:<nil>}
	I1018 15:05:32.575699  340611 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ab6063c7cdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:eb:32:cc:ab:b4} reservation:<nil>}
	I1018 15:05:32.576712  340611 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002012720}
	I1018 15:05:32.576740  340611 network_create.go:124] attempt to create docker network embed-certs-775590 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 15:05:32.576796  340611 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-775590 embed-certs-775590
	I1018 15:05:32.637990  340611 network_create.go:108] docker network embed-certs-775590 192.168.76.0/24 created
	I1018 15:05:32.638033  340611 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-775590" container
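
The three "skipping subnet ... taken" lines walk candidates 192.168.49.0/24, .58 and .67 before settling on .76, a stride of 9 in the third octet, with the gateway at .1 and the container's static IP at .2. A sketch reproducing that selection; the stride and bounds are inferred from this run, not a documented contract.

    package main

    import "fmt"

    // freeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... (the stride
    // seen in the log) and returns the first candidate not in `taken`,
    // plus the gateway (.1) and static container IP (.2) derived from it.
    func freeSubnet(taken map[string]bool) (subnet, gateway, staticIP string) {
    	for octet := 49; octet <= 247; octet += 9 {
    		subnet = fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[subnet] {
    			gateway = fmt.Sprintf("192.168.%d.1", octet)
    			staticIP = fmt.Sprintf("192.168.%d.2", octet)
    			return
    		}
    	}
    	return "", "", ""
    }

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true, // br-67ded9675d49
    		"192.168.58.0/24": true, // br-4b365c92bc46
    		"192.168.67.0/24": true, // br-9ab6063c7cdc
    	}
    	fmt.Println(freeSubnet(taken)) // 192.168.76.0/24 192.168.76.1 192.168.76.2
    }
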
	I1018 15:05:32.638105  340611 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 15:05:32.657500  340611 cli_runner.go:164] Run: docker volume create embed-certs-775590 --label name.minikube.sigs.k8s.io=embed-certs-775590 --label created_by.minikube.sigs.k8s.io=true
	I1018 15:05:32.678859  340611 oci.go:103] Successfully created a docker volume embed-certs-775590
	I1018 15:05:32.678961  340611 cli_runner.go:164] Run: docker run --rm --name embed-certs-775590-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-775590 --entrypoint /usr/bin/test -v embed-certs-775590:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 15:05:33.102810  340611 oci.go:107] Successfully prepared a docker volume embed-certs-775590
	I1018 15:05:33.102874  340611 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:05:33.102900  340611 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 15:05:33.103021  340611 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-775590:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
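
Preload extraction is a one-shot container: the lz4 tarball is bind-mounted read-only at /preloaded.tar, the machine volume is mounted at /extractDir, and tar runs as the entrypoint inside the kicbase image. A Go sketch that assembles the same docker run invocation; the paths in main are placeholders.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // extractPreload reproduces the one-shot container above: mount the
    // preload tarball read-only, mount the machine volume at /extractDir,
    // and run tar (with lz4 decompression) inside the kicbase image.
    func extractPreload(tarball, volume, image string) error {
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("extract preload: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(extractPreload(
    		"/path/to/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4",
    		"embed-certs-775590",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757"))
    }
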
	I1018 15:05:32.568243  340627 out.go:252] * Restarting existing docker container for "no-preload-165275" ...
	I1018 15:05:32.568366  340627 cli_runner.go:164] Run: docker start no-preload-165275
	I1018 15:05:32.847388  340627 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:05:32.867717  340627 kic.go:430] container "no-preload-165275" state is running.
	I1018 15:05:32.868127  340627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-165275
	I1018 15:05:32.888987  340627 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/config.json ...
	I1018 15:05:32.889320  340627 machine.go:93] provisionDockerMachine start ...
	I1018 15:05:32.889410  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:32.909562  340627 main.go:141] libmachine: Using SSH client type: native
	I1018 15:05:32.909884  340627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1018 15:05:32.909907  340627 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:05:32.910690  340627 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50340->127.0.0.1:33068: read: connection reset by peer
	I1018 15:05:36.050301  340627 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-165275
	
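
The 15:05:32 dial fails with a connection reset because sshd inside the just-restarted container is not listening yet; the same command succeeds at 15:05:36. A sketch of the bounded retry this implies; the 500ms interval and 30s budget are assumptions, not minikube's actual tuning.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForSSH dials until the forwarded sshd port accepts a connection,
    // recovering from the "connection reset by peer" seen above.
    func waitForSSH(addr string, budget time.Duration) error {
    	deadline := time.Now().Add(budget)
    	for {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("ssh not ready on %s: %v", addr, err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() { fmt.Println(waitForSSH("127.0.0.1:33068", 30*time.Second)) }
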
	I1018 15:05:36.050337  340627 ubuntu.go:182] provisioning hostname "no-preload-165275"
	I1018 15:05:36.050419  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:36.069176  340627 main.go:141] libmachine: Using SSH client type: native
	I1018 15:05:36.069420  340627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1018 15:05:36.069439  340627 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-165275 && echo "no-preload-165275" | sudo tee /etc/hostname
	I1018 15:05:36.268196  340627 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-165275
	
	I1018 15:05:36.268317  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:36.286657  340627 main.go:141] libmachine: Using SSH client type: native
	I1018 15:05:36.286959  340627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1018 15:05:36.286986  340627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-165275' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-165275/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-165275' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:05:36.422886  340627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 15:05:36.422931  340627 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 15:05:36.422972  340627 ubuntu.go:190] setting up certificates
	I1018 15:05:36.422986  340627 provision.go:84] configureAuth start
	I1018 15:05:36.423042  340627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-165275
	I1018 15:05:36.443238  340627 provision.go:143] copyHostCerts
	I1018 15:05:36.443303  340627 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem, removing ...
	I1018 15:05:36.443319  340627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:05:36.529692  340627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 15:05:36.529899  340627 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem, removing ...
	I1018 15:05:36.529951  340627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:05:36.530009  340627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 15:05:36.530107  340627 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem, removing ...
	I1018 15:05:36.530120  340627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:05:36.530159  340627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 15:05:36.530240  340627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.no-preload-165275 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-165275]
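
The server cert above is issued with both DNS SANs (localhost, minikube, no-preload-165275) and IP SANs (127.0.0.1, 192.168.85.2) so the endpoint verifies however it is addressed. A compact Go sketch producing a certificate with those SANs; it self-signs to stay short, whereas the real flow signs with the shared minikube CA key shown in the log.

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	// SANs copied from the "generating server cert" line above.
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-165275"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		DNSNames:     []string{"localhost", "minikube", "no-preload-165275"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }
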
	I1018 15:05:37.004116  340627 provision.go:177] copyRemoteCerts
	I1018 15:05:37.004211  340627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:05:37.004264  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:37.023055  340627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa Username:docker}
	I1018 15:05:37.120287  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 15:05:37.138000  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 15:05:37.156201  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:05:37.174489  340627 provision.go:87] duration metric: took 751.483239ms to configureAuth
	I1018 15:05:37.174516  340627 ubuntu.go:206] setting minikube options for container-runtime
	I1018 15:05:37.174695  340627 config.go:182] Loaded profile config "no-preload-165275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:05:37.174815  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:37.191936  340627 main.go:141] libmachine: Using SSH client type: native
	I1018 15:05:37.192182  340627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1018 15:05:37.192210  340627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:05:37.643604  340627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 15:05:37.643633  340627 machine.go:96] duration metric: took 4.754293354s to provisionDockerMachine
	I1018 15:05:37.643648  340627 start.go:293] postStartSetup for "no-preload-165275" (driver="docker")
	I1018 15:05:37.643663  340627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:05:37.643726  340627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:05:37.643781  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:37.664082  340627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa Username:docker}
	I1018 15:05:37.763494  340627 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:05:37.767692  340627 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 15:05:37.767729  340627 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 15:05:37.767743  340627 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/addons for local assets ...
	I1018 15:05:37.767815  340627 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/files for local assets ...
	I1018 15:05:37.767982  340627 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem -> 931872.pem in /etc/ssl/certs
	I1018 15:05:37.768120  340627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:05:37.778213  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:05:37.801259  340627 start.go:296] duration metric: took 157.589139ms for postStartSetup
	I1018 15:05:37.801352  340627 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 15:05:37.801421  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:37.820685  340627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa Username:docker}
	I1018 15:05:37.917124  340627 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 15:05:37.922499  340627 fix.go:56] duration metric: took 5.377436687s for fixHost
	I1018 15:05:37.922532  340627 start.go:83] releasing machines lock for "no-preload-165275", held for 5.377501712s
	I1018 15:05:37.922616  340627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-165275
	I1018 15:05:37.942927  340627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:05:37.942988  340627 ssh_runner.go:195] Run: cat /version.json
	I1018 15:05:37.943012  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:37.943046  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:37.964343  340627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa Username:docker}
	I1018 15:05:37.964343  340627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa Username:docker}
	I1018 15:05:38.124894  340627 ssh_runner.go:195] Run: systemctl --version
	I1018 15:05:38.132145  340627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:05:38.182217  340627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:05:38.188148  340627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:05:38.188224  340627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:05:38.199158  340627 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 15:05:38.199186  340627 start.go:495] detecting cgroup driver to use...
	I1018 15:05:38.199223  340627 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 15:05:38.199284  340627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:05:38.215971  340627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:05:38.236162  340627 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:05:38.236228  340627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:05:38.261251  340627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:05:38.279720  340627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:05:38.388421  340627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:05:38.479449  340627 docker.go:234] disabling docker service ...
	I1018 15:05:38.479521  340627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:05:38.496118  340627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:05:38.510612  340627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:05:38.603398  340627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:05:38.699979  340627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 15:05:38.713119  340627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:05:38.729827  340627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 15:05:38.729897  340627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:38.739883  340627 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 15:05:38.739965  340627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:38.750794  340627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:38.760252  340627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:38.769483  340627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:05:38.778476  340627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:38.788481  340627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:38.797950  340627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:38.807888  340627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:05:38.818370  340627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 15:05:38.827393  340627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:05:38.918595  340627 ssh_runner.go:195] Run: sudo systemctl restart crio
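
The run of sed edits above pins the pause image to registry.k8s.io/pause:3.10.1, switches cgroup_manager to "systemd", inserts conmon_cgroup = "pod", and opens unprivileged ports via default_sysctls, after which crio is restarted. A Go sketch of the first two substitutions as regexp rewrites; the starting values in conf are placeholders, not the container's actual defaults.

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"`

    	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
    	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

    	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
    	cgm := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	conf = cgm.ReplaceAllString(conf, `cgroup_manager = "systemd"`)

    	fmt.Println(conf)
    	// The remaining edits in the log leave the drop-in also carrying:
    	//   conmon_cgroup = "pod"
    	//   default_sysctls = [
    	//     "net.ipv4.ip_unprivileged_port_start=0",
    	//   ]
    }
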
	I1018 15:05:39.032614  340627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:05:39.032692  340627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:05:39.038707  340627 start.go:563] Will wait 60s for crictl version
	I1018 15:05:39.038770  340627 ssh_runner.go:195] Run: which crictl
	I1018 15:05:39.043453  340627 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 15:05:39.073719  340627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 15:05:39.073797  340627 ssh_runner.go:195] Run: crio --version
	I1018 15:05:39.104591  340627 ssh_runner.go:195] Run: crio --version
	I1018 15:05:39.142135  340627 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 15:05:39.143477  340627 cli_runner.go:164] Run: docker network inspect no-preload-165275 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:05:39.165434  340627 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 15:05:39.170203  340627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
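
The bash one-liner above is an upsert: strip any line already tab-suffixed with host.minikube.internal, append the fresh mapping, write to a temp file, then cp it over /etc/hosts so the file is never truncated mid-read. The same logic in Go, printing the rewritten contents instead of installing them.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost drops any line already ending in "\t<name>" and appends
    // the fresh "ip\tname" mapping, mirroring the shell pipeline above.
    func upsertHost(contents, ip, name string) string {
    	var keep []string
    	for _, line := range strings.Split(contents, "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			keep = append(keep, line)
    		}
    	}
    	return strings.Join(keep, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
    	data, _ := os.ReadFile("/etc/hosts")
    	fmt.Print(upsertHost(strings.TrimRight(string(data), "\n"),
    		"192.168.85.1", "host.minikube.internal"))
    }
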
	I1018 15:05:39.181542  340627 kubeadm.go:883] updating cluster {Name:no-preload-165275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-165275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 15:05:39.181646  340627 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:05:39.181686  340627 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:05:39.221263  340627 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:05:39.221285  340627 cache_images.go:85] Images are preloaded, skipping loading
	I1018 15:05:39.221293  340627 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 15:05:39.221422  340627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-165275 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-165275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
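
The bare ExecStart= line in the unit above is deliberate: in a systemd drop-in it clears the ExecStart inherited from the stock kubelet.service, and systemd would otherwise reject a second ExecStart for a non-oneshot service. The flags that follow (--hostname-override, --node-ip) bind the kubelet to the hostname and address computed earlier in this log.
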
	I1018 15:05:39.221506  340627 ssh_runner.go:195] Run: crio config
	I1018 15:05:39.271048  340627 cni.go:84] Creating CNI manager for ""
	I1018 15:05:39.271076  340627 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:05:39.271099  340627 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 15:05:39.271137  340627 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-165275 NodeName:no-preload-165275 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 15:05:39.271310  340627 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-165275"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 15:05:39.271386  340627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 15:05:39.279819  340627 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 15:05:39.279890  340627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 15:05:39.287658  340627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 15:05:39.301397  340627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 15:05:39.314486  340627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1018 15:05:39.328119  340627 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 15:05:39.332360  340627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:05:39.342507  340627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:05:39.438176  340627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:05:39.463576  340627 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275 for IP: 192.168.85.2
	I1018 15:05:39.463598  340627 certs.go:195] generating shared ca certs ...
	I1018 15:05:39.463638  340627 certs.go:227] acquiring lock for ca certs: {Name:mk6d3b4e44856f498157352ffe8bb89d2c7a3998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:05:39.463797  340627 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key
	I1018 15:05:39.463844  340627 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key
	I1018 15:05:39.463853  340627 certs.go:257] generating profile certs ...
	I1018 15:05:39.463978  340627 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/client.key
	I1018 15:05:39.464052  340627 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/apiserver.key.93e76921
	I1018 15:05:39.464175  340627 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/proxy-client.key
	I1018 15:05:39.464349  340627 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem (1338 bytes)
	W1018 15:05:39.464400  340627 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187_empty.pem, impossibly tiny 0 bytes
	I1018 15:05:39.464415  340627 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 15:05:39.464449  340627 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem (1082 bytes)
	I1018 15:05:39.464482  340627 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem (1123 bytes)
	I1018 15:05:39.464508  340627 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem (1675 bytes)
	I1018 15:05:39.464562  340627 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:05:39.465524  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 15:05:39.485814  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 15:05:39.508418  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 15:05:39.530869  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 15:05:39.558492  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 15:05:39.581555  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 15:05:39.601305  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 15:05:39.619241  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 15:05:39.637505  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 15:05:39.655288  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem --> /usr/share/ca-certificates/93187.pem (1338 bytes)
	I1018 15:05:39.674118  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /usr/share/ca-certificates/931872.pem (1708 bytes)
	I1018 15:05:39.693640  340627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 15:05:39.707282  340627 ssh_runner.go:195] Run: openssl version
	I1018 15:05:39.713363  340627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93187.pem && ln -fs /usr/share/ca-certificates/93187.pem /etc/ssl/certs/93187.pem"
	I1018 15:05:39.722231  340627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93187.pem
	I1018 15:05:39.725949  340627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 14:26 /usr/share/ca-certificates/93187.pem
	I1018 15:05:39.725999  340627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93187.pem
	I1018 15:05:39.763857  340627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93187.pem /etc/ssl/certs/51391683.0"
	I1018 15:05:39.772320  340627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/931872.pem && ln -fs /usr/share/ca-certificates/931872.pem /etc/ssl/certs/931872.pem"
	I1018 15:05:39.780898  340627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/931872.pem
	I1018 15:05:39.784868  340627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 14:26 /usr/share/ca-certificates/931872.pem
	I1018 15:05:39.784947  340627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/931872.pem
	I1018 15:05:39.822389  340627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/931872.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 15:05:39.832272  340627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 15:05:39.843113  340627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:05:39.847388  340627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:05:39.847447  340627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:05:39.887626  340627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
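
The link targets 51391683.0, 3ec20f2e.0 and b5213941.0 are OpenSSL subject-hash names: a directory of CAs is looked up by the hash that "openssl x509 -hash -noout" prints, with the .0 suffix disambiguating collisions, which is exactly what the preceding openssl runs compute. A sketch of that hash-and-link step; running it against /etc/ssl/certs needs root.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCACert re-creates the "<subject-hash>.0" symlink convention:
    // ask openssl for the certificate's subject hash, then link
    // <certsDir>/<hash>.0 at the certificate, as `ln -fs` does above.
    func linkCACert(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // mirror the force flag of `ln -fs`
    	return os.Symlink(certPath, link)
    }

    func main() {
    	fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }
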
	I1018 15:05:39.895862  340627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 15:05:39.899716  340627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 15:05:39.936443  340627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 15:05:39.979577  340627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 15:05:40.037866  340627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 15:05:40.087924  340627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 15:05:40.148093  340627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 15:05:40.213284  340627 kubeadm.go:400] StartCluster: {Name:no-preload-165275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-165275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:05:40.213388  340627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 15:05:40.213444  340627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 15:05:40.252230  340627 cri.go:89] found id: "d37cf270acf4cdb482c3d7fdb5fa2e8ecdf544a1b1172db005a424e0b482c119"
	I1018 15:05:40.252255  340627 cri.go:89] found id: "ce5891388244aaa439d0521f1c59f74520a5be8cfe55bae6fec434a5125ea972"
	I1018 15:05:40.252261  340627 cri.go:89] found id: "c1d28d4d24c3ece98b690ada9bd56a5d7ebdd925b9e2320e8f7d9f1b62f77b34"
	I1018 15:05:40.252265  340627 cri.go:89] found id: "3e2c583673b99348bde570e54f1913de407877ce7969439954326ffcf6f4fc31"
	I1018 15:05:40.252364  340627 cri.go:89] found id: ""
	I1018 15:05:40.252420  340627 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 15:05:40.274776  340627 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:05:40Z" level=error msg="open /run/runc: no such file or directory"
	I1018 15:05:40.274886  340627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 15:05:40.285663  340627 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 15:05:40.285683  340627 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 15:05:40.285728  340627 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 15:05:40.295612  340627 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 15:05:40.296455  340627 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-165275" does not appear in /home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:05:40.296980  340627 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-89690/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-165275" cluster setting kubeconfig missing "no-preload-165275" context setting]
	I1018 15:05:40.297754  340627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/kubeconfig: {Name:mkdc6fbc9be8e57447490b30dd5bb0086611876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:05:40.299709  340627 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 15:05:40.309547  340627 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 15:05:40.309578  340627 kubeadm.go:601] duration metric: took 23.888293ms to restartPrimaryControlPlane
	I1018 15:05:40.309587  340627 kubeadm.go:402] duration metric: took 96.31728ms to StartCluster
	I1018 15:05:40.309604  340627 settings.go:142] acquiring lock: {Name:mk9d9d72167811b3554435e137ed17137bf54a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:05:40.309667  340627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:05:40.311107  340627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/kubeconfig: {Name:mkdc6fbc9be8e57447490b30dd5bb0086611876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:05:40.311348  340627 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:05:40.311485  340627 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 15:05:40.311597  340627 config.go:182] Loaded profile config "no-preload-165275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:05:40.311602  340627 addons.go:69] Setting storage-provisioner=true in profile "no-preload-165275"
	I1018 15:05:40.311622  340627 addons.go:238] Setting addon storage-provisioner=true in "no-preload-165275"
	W1018 15:05:40.311630  340627 addons.go:247] addon storage-provisioner should already be in state true
	I1018 15:05:40.311634  340627 addons.go:69] Setting dashboard=true in profile "no-preload-165275"
	I1018 15:05:40.311641  340627 addons.go:69] Setting default-storageclass=true in profile "no-preload-165275"
	I1018 15:05:40.311654  340627 addons.go:238] Setting addon dashboard=true in "no-preload-165275"
	I1018 15:05:40.311654  340627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-165275"
	W1018 15:05:40.311663  340627 addons.go:247] addon dashboard should already be in state true
	I1018 15:05:40.311664  340627 host.go:66] Checking if "no-preload-165275" exists ...
	I1018 15:05:40.311691  340627 host.go:66] Checking if "no-preload-165275" exists ...
	I1018 15:05:40.312013  340627 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:05:40.312182  340627 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:05:40.312205  340627 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:05:40.314684  340627 out.go:179] * Verifying Kubernetes components...
	I1018 15:05:40.315776  340627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:05:40.337955  340627 addons.go:238] Setting addon default-storageclass=true in "no-preload-165275"
	W1018 15:05:40.337981  340627 addons.go:247] addon default-storageclass should already be in state true
	I1018 15:05:40.338010  340627 host.go:66] Checking if "no-preload-165275" exists ...
	I1018 15:05:40.338471  340627 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:05:40.339513  340627 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 15:05:40.341027  340627 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 15:05:40.341044  340627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 15:05:40.341095  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:40.342905  340627 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 15:05:40.344356  340627 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 15:05:37.583764  340611 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-775590:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.480680449s)
	I1018 15:05:37.583796  340611 kic.go:203] duration metric: took 4.480892265s to extract preloaded images to volume ...
	W1018 15:05:37.583893  340611 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 15:05:37.583970  340611 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 15:05:37.584015  340611 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 15:05:37.649279  340611 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-775590 --name embed-certs-775590 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-775590 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-775590 --network embed-certs-775590 --ip 192.168.76.2 --volume embed-certs-775590:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 15:05:37.935219  340611 cli_runner.go:164] Run: docker container inspect embed-certs-775590 --format={{.State.Running}}
	I1018 15:05:37.956184  340611 cli_runner.go:164] Run: docker container inspect embed-certs-775590 --format={{.State.Status}}
	I1018 15:05:37.980042  340611 cli_runner.go:164] Run: docker exec embed-certs-775590 stat /var/lib/dpkg/alternatives/iptables
	I1018 15:05:38.029998  340611 oci.go:144] the created container "embed-certs-775590" has a running status.
	I1018 15:05:38.030041  340611 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/embed-certs-775590/id_rsa...
	I1018 15:05:38.171566  340611 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-89690/.minikube/machines/embed-certs-775590/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 15:05:38.202898  340611 cli_runner.go:164] Run: docker container inspect embed-certs-775590 --format={{.State.Status}}
	I1018 15:05:38.225241  340611 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 15:05:38.225266  340611 kic_runner.go:114] Args: [docker exec --privileged embed-certs-775590 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 15:05:38.281763  340611 cli_runner.go:164] Run: docker container inspect embed-certs-775590 --format={{.State.Status}}
	I1018 15:05:38.302452  340611 machine.go:93] provisionDockerMachine start ...
	I1018 15:05:38.302634  340611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:05:38.332873  340611 main.go:141] libmachine: Using SSH client type: native
	I1018 15:05:38.333272  340611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1018 15:05:38.333294  340611 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:05:38.479153  340611 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-775590
	
	I1018 15:05:38.479189  340611 ubuntu.go:182] provisioning hostname "embed-certs-775590"
	I1018 15:05:38.479256  340611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:05:38.499392  340611 main.go:141] libmachine: Using SSH client type: native
	I1018 15:05:38.499704  340611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1018 15:05:38.499731  340611 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-775590 && echo "embed-certs-775590" | sudo tee /etc/hostname
	I1018 15:05:38.657592  340611 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-775590
	
	I1018 15:05:38.657679  340611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:05:38.677976  340611 main.go:141] libmachine: Using SSH client type: native
	I1018 15:05:38.678265  340611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1018 15:05:38.678292  340611 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-775590' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-775590/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-775590' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:05:38.814788  340611 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 15:05:38.814829  340611 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 15:05:38.814888  340611 ubuntu.go:190] setting up certificates
	I1018 15:05:38.814902  340611 provision.go:84] configureAuth start
	I1018 15:05:38.814995  340611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-775590
	I1018 15:05:38.837234  340611 provision.go:143] copyHostCerts
	I1018 15:05:38.837390  340611 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem, removing ...
	I1018 15:05:38.837407  340611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:05:38.837488  340611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 15:05:38.837589  340611 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem, removing ...
	I1018 15:05:38.837598  340611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:05:38.837631  340611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 15:05:38.837696  340611 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem, removing ...
	I1018 15:05:38.837705  340611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:05:38.837733  340611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 15:05:38.837790  340611 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.embed-certs-775590 san=[127.0.0.1 192.168.76.2 embed-certs-775590 localhost minikube]
	I1018 15:05:39.023235  340611 provision.go:177] copyRemoteCerts
	I1018 15:05:39.023301  340611 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:05:39.023356  340611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:05:39.045227  340611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/embed-certs-775590/id_rsa Username:docker}
	I1018 15:05:39.147568  340611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:05:39.170895  340611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 15:05:39.189996  340611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 15:05:39.209945  340611 provision.go:87] duration metric: took 394.999837ms to configureAuth
	I1018 15:05:39.209976  340611 ubuntu.go:206] setting minikube options for container-runtime
	I1018 15:05:39.210176  340611 config.go:182] Loaded profile config "embed-certs-775590": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:05:39.210297  340611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:05:39.229897  340611 main.go:141] libmachine: Using SSH client type: native
	I1018 15:05:39.230220  340611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1018 15:05:39.230247  340611 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:05:39.505366  340611 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 15:05:39.505396  340611 machine.go:96] duration metric: took 1.202918838s to provisionDockerMachine
	I1018 15:05:39.505408  340611 client.go:171] duration metric: took 6.989205013s to LocalClient.Create
	I1018 15:05:39.505432  340611 start.go:167] duration metric: took 6.989268605s to libmachine.API.Create "embed-certs-775590"
	I1018 15:05:39.505442  340611 start.go:293] postStartSetup for "embed-certs-775590" (driver="docker")
	I1018 15:05:39.505456  340611 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:05:39.505543  340611 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:05:39.505600  340611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:05:39.526240  340611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/embed-certs-775590/id_rsa Username:docker}
	I1018 15:05:39.630956  340611 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:05:39.634771  340611 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 15:05:39.634800  340611 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 15:05:39.634812  340611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/addons for local assets ...
	I1018 15:05:39.634867  340611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/files for local assets ...
	I1018 15:05:39.635004  340611 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem -> 931872.pem in /etc/ssl/certs
	I1018 15:05:39.635109  340611 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:05:39.642637  340611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:05:39.663296  340611 start.go:296] duration metric: took 157.840714ms for postStartSetup
	I1018 15:05:39.663615  340611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-775590
	I1018 15:05:39.683718  340611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/config.json ...
	I1018 15:05:39.684021  340611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 15:05:39.684065  340611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:05:39.702196  340611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/embed-certs-775590/id_rsa Username:docker}
	I1018 15:05:39.796902  340611 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 15:05:39.801723  340611 start.go:128] duration metric: took 7.288688955s to createHost
	I1018 15:05:39.801748  340611 start.go:83] releasing machines lock for "embed-certs-775590", held for 7.288862673s
	I1018 15:05:39.801826  340611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-775590
	I1018 15:05:39.819525  340611 ssh_runner.go:195] Run: cat /version.json
	I1018 15:05:39.819554  340611 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:05:39.819571  340611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:05:39.819624  340611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:05:39.838577  340611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/embed-certs-775590/id_rsa Username:docker}
	I1018 15:05:39.839489  340611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/embed-certs-775590/id_rsa Username:docker}
	I1018 15:05:40.009479  340611 ssh_runner.go:195] Run: systemctl --version
	I1018 15:05:40.018349  340611 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:05:40.068862  340611 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:05:40.075811  340611 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:05:40.075965  340611 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:05:40.123707  340611 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 15:05:40.123753  340611 start.go:495] detecting cgroup driver to use...
	I1018 15:05:40.123788  340611 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 15:05:40.123971  340611 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:05:40.154768  340611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:05:40.175094  340611 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:05:40.175180  340611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:05:40.201164  340611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:05:40.224895  340611 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:05:40.350984  340611 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:05:40.503199  340611 docker.go:234] disabling docker service ...
	I1018 15:05:40.503358  340611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:05:40.535706  340611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:05:40.560418  340611 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:05:40.691319  340611 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:05:40.809367  340611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 15:05:40.826100  340611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:05:40.845820  340611 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 15:05:40.845999  340611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:40.857815  340611 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 15:05:40.857886  340611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:40.870602  340611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:40.883267  340611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:40.902650  340611 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:05:40.912965  340611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:40.922684  340611 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:40.940803  340611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:40.952650  340611 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:05:40.963206  340611 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 15:05:40.975731  340611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:05:41.104233  340611 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 15:05:41.254742  340611 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:05:41.254815  340611 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:05:41.260028  340611 start.go:563] Will wait 60s for crictl version
	I1018 15:05:41.260099  340611 ssh_runner.go:195] Run: which crictl
	I1018 15:05:41.264817  340611 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 15:05:41.297600  340611 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 15:05:41.297678  340611 ssh_runner.go:195] Run: crio --version
	I1018 15:05:41.335197  340611 ssh_runner.go:195] Run: crio --version
	I1018 15:05:41.375530  340611 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
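	The block above ends the runtime setup: minikube rewrote the CRI-O drop-in over SSH and restarted the service. A minimal sketch of that session, kept to the exact paths and values shown in the log, in case the steps need to be replayed by hand:
	
	  # point crictl at the CRI-O socket the test waits on
	  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	  # pin the pause image and switch to the systemd cgroup driver in the drop-in
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	  # apply, then verify the socket and crictl (the test allows 60s for each)
	  sudo systemctl daemon-reload && sudo systemctl restart crio
	  sudo crictl version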
	
	
	==> CRI-O <==
	Oct 18 15:05:05 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:05.549702513Z" level=info msg="Created container ca59ac639c4af3d27021b467cc03eca4d72a3f9c7d8418fc024c78d9006549fe: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fsjwd/kubernetes-dashboard" id=eec66f6f-0020-41b2-8451-4bff97a5dec7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:05:05 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:05.550286736Z" level=info msg="Starting container: ca59ac639c4af3d27021b467cc03eca4d72a3f9c7d8418fc024c78d9006549fe" id=4e0a8c74-dc33-4bf6-90ec-fd038c71a8f2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:05:05 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:05.55217582Z" level=info msg="Started container" PID=1742 containerID=ca59ac639c4af3d27021b467cc03eca4d72a3f9c7d8418fc024c78d9006549fe description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fsjwd/kubernetes-dashboard id=4e0a8c74-dc33-4bf6-90ec-fd038c71a8f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=52832e2207fcd42f0c4d275f1d6a6eb49814e0649b34072ddd43432ab105c8b4
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.874378719Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0b85fa3b-4fa5-4023-8fb7-1a19d39391a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.875337619Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7d7f31fc-e0df-4018-8cfa-3a34d2f2ce86 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.876482003Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=22e2a9e5-223d-42ed-bf63-b346b8e4c6a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.876773778Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.881261475Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.881540567Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a702a0bb435c724b11ca071388b959d60df1e8a255f08da39d54ea27303fed6c/merged/etc/passwd: no such file or directory"
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.881634027Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a702a0bb435c724b11ca071388b959d60df1e8a255f08da39d54ea27303fed6c/merged/etc/group: no such file or directory"
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.881991398Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.924394837Z" level=info msg="Created container 067057e99a71c35cef6be48c228170e8a97bf712bc8e81bb891f09faeeff93cf: kube-system/storage-provisioner/storage-provisioner" id=22e2a9e5-223d-42ed-bf63-b346b8e4c6a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.925144268Z" level=info msg="Starting container: 067057e99a71c35cef6be48c228170e8a97bf712bc8e81bb891f09faeeff93cf" id=e0a2d3a4-6ddf-457f-a424-697c79b20990 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.92726014Z" level=info msg="Started container" PID=1770 containerID=067057e99a71c35cef6be48c228170e8a97bf712bc8e81bb891f09faeeff93cf description=kube-system/storage-provisioner/storage-provisioner id=e0a2d3a4-6ddf-457f-a424-697c79b20990 name=/runtime.v1.RuntimeService/StartContainer sandboxID=385161ffc9351d2c6def8a9233a0080eeb73531edddc365b943cd2d5422d9889
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.745878875Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d114dea8-8958-4f96-9698-afdedd02f4e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.746799818Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2a578fe7-fbd2-4e9a-8f8f-5888434a20c7 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.750482413Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w/dashboard-metrics-scraper" id=e3c3771f-41fc-40f4-8f16-255c19c102c0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.750846201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.761457382Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.762097591Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.803004063Z" level=info msg="Created container 2d707f4e636f3c33af0939dcc558810386c9490c2aadb8081a1acd6065892e82: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w/dashboard-metrics-scraper" id=e3c3771f-41fc-40f4-8f16-255c19c102c0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.803771023Z" level=info msg="Starting container: 2d707f4e636f3c33af0939dcc558810386c9490c2aadb8081a1acd6065892e82" id=9dbf9375-5856-4702-bb43-10312d308d16 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.806608736Z" level=info msg="Started container" PID=1785 containerID=2d707f4e636f3c33af0939dcc558810386c9490c2aadb8081a1acd6065892e82 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w/dashboard-metrics-scraper id=9dbf9375-5856-4702-bb43-10312d308d16 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c6079cf274a1ea30a4f60de6c21e4edcfb9bbd35c675c40b8ea1fbc86973d2d
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.889769807Z" level=info msg="Removing container: 44d616f811d03ec4fe2e458514a60eb296291128a62959a31cbf8d5a20cdd3b7" id=49a47c26-6c4e-421e-8eb8-d9c3014525ef name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.901115602Z" level=info msg="Removed container 44d616f811d03ec4fe2e458514a60eb296291128a62959a31cbf8d5a20cdd3b7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w/dashboard-metrics-scraper" id=49a47c26-6c4e-421e-8eb8-d9c3014525ef name=/runtime.v1.RuntimeService/RemoveContainer
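	The CRI-O entries above show the dashboard-metrics-scraper container being created, started, and its previous attempt removed, which is a restart loop viewed from the runtime side. A short sketch for tracing it directly on the node with crictl, using the container name and ID from the log:
	
	  # list every attempt for the crash-looping container, then read its output
	  sudo crictl ps -a --name dashboard-metrics-scraper
	  sudo crictl logs 2d707f4e636f3c33af0939dcc558810386c9490c2aadb8081a1acd6065892e82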
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	2d707f4e636f3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   3c6079cf274a1       dashboard-metrics-scraper-5f989dc9cf-h786w       kubernetes-dashboard
	067057e99a71c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   385161ffc9351       storage-provisioner                              kube-system
	ca59ac639c4af       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   36 seconds ago      Running             kubernetes-dashboard        0                   52832e2207fcd       kubernetes-dashboard-8694d4445c-fsjwd            kubernetes-dashboard
	6b4b5c46eb7c0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           53 seconds ago      Running             coredns                     0                   214f7f9f0fe13       coredns-5dd5756b68-j8xvf                         kube-system
	fb5d3cda7b7d3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   cf8a166ceeea4       busybox                                          default
	67ecafd74cf06       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   b07a8572172d8       kindnet-xwd4j                                    kube-system
	f6b23d7900af3       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           53 seconds ago      Running             kube-proxy                  0                   dfc1099c3067b       kube-proxy-kwt74                                 kube-system
	52b03114a7d11       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   385161ffc9351       storage-provisioner                              kube-system
	66072254c9bf6       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           57 seconds ago      Running             kube-scheduler              0                   b30ba162954d0       kube-scheduler-old-k8s-version-948537            kube-system
	44dad120630eb       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           57 seconds ago      Running             kube-controller-manager     0                   c646ecf8ad549       kube-controller-manager-old-k8s-version-948537   kube-system
	c6c9f1798915d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           57 seconds ago      Running             etcd                        0                   08e11699895bd       etcd-old-k8s-version-948537                      kube-system
	851f6b38dcd85       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           57 seconds ago      Running             kube-apiserver              0                   7648ee04f4961       kube-apiserver-old-k8s-version-948537            kube-system
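	In the status table above, dashboard-metrics-scraper sits in the Exited state at attempt 2 while every other container is Running. The same picture from the Kubernetes side, using the pod name from the table (a sketch, assuming kubectl is pointed at this profile):
	
	  kubectl -n kubernetes-dashboard get pods
	  kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-5f989dc9cf-h786w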
	
	
	==> coredns [6b4b5c46eb7c020c11c44ffc6289452f21552a034d98560f814fd10cd937517d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52729 - 43173 "HINFO IN 1963076601915104059.3394339738268485656. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.09531895s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
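	The repeated "Still waiting on: kubernetes" lines above mean CoreDNS came up before its kubernetes plugin finished syncing, so the ready plugin keeps reporting not-ready. A quick sketch for watching that from kubectl (the label and namespace are the upstream defaults):
	
	  kubectl -n kube-system get pods -l k8s-app=kube-dns
	  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20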
	
	
	==> describe nodes <==
	Name:               old-k8s-version-948537
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-948537
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=old-k8s-version-948537
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_03_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:03:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-948537
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:05:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:05:17 +0000   Sat, 18 Oct 2025 15:03:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:05:17 +0000   Sat, 18 Oct 2025 15:03:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:05:17 +0000   Sat, 18 Oct 2025 15:03:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 15:05:17 +0000   Sat, 18 Oct 2025 15:04:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-948537
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                47943eca-9697-4781-a55f-5b00086edf55
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-j8xvf                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-old-k8s-version-948537                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m5s
	  kube-system                 kindnet-xwd4j                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-948537             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-old-k8s-version-948537    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-proxy-kwt74                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-old-k8s-version-948537             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-h786w        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-fsjwd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 111s                   kube-proxy       
	  Normal  Starting                 54s                    kube-proxy       
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-948537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-948537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-948537 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m5s                   kubelet          Node old-k8s-version-948537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m5s                   kubelet          Node old-k8s-version-948537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m5s                   kubelet          Node old-k8s-version-948537 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m5s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s                   node-controller  Node old-k8s-version-948537 event: Registered Node old-k8s-version-948537 in Controller
	  Normal  NodeReady                98s                    kubelet          Node old-k8s-version-948537 status is now: NodeReady
	  Normal  Starting                 59s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x9 over 59s)      kubelet          Node old-k8s-version-948537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)      kubelet          Node old-k8s-version-948537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x7 over 59s)      kubelet          Node old-k8s-version-948537 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                    node-controller  Node old-k8s-version-948537 event: Registered Node old-k8s-version-948537 in Controller
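	The node description above is standard `kubectl describe node` output; the Ready condition flipped to True at 15:04:04 and the heartbeats are current. A sketch for reproducing just the readiness check, using the node name from this report:
	
	  kubectl describe node old-k8s-version-948537
	  kubectl get node old-k8s-version-948537 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'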
	
	
	==> dmesg <==
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
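	The martian-source entries above repeat at roughly one-second and then doubling intervals (+1s, +2s, +4s, +8s, +16s), a backoff pattern that suggests one retransmitting connection from 10.244.0.20 rather than a stream of new traffic. A one-line sketch for pulling these reports, with their link-layer headers, out of the host's ring buffer:
	
	  sudo dmesg --ctime | grep -A1 'martian source'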
	
	
	==> etcd [c6c9f1798915d53f9ebc8eea360ea84ac0d228a2a817fa4a501701022703284a] <==
	{"level":"info","ts":"2025-10-18T15:04:44.33544Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T15:04:44.335515Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T15:04:44.337538Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T15:04:44.338074Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T15:04:44.337694Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-18T15:04:44.338894Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-18T15:04:44.338745Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T15:04:45.825183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T15:04:45.825225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T15:04:45.825255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-18T15:04:45.825267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T15:04:45.825272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-18T15:04:45.82528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-10-18T15:04:45.825287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-18T15:04:45.826163Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-948537 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T15:04:45.826196Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T15:04:45.826185Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T15:04:45.826307Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T15:04:45.826337Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T15:04:45.827985Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-10-18T15:04:45.82841Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-18T15:05:36.619843Z","caller":"traceutil/trace.go:171","msg":"trace[485509943] linearizableReadLoop","detail":"{readStateIndex:696; appliedIndex:695; }","duration":"214.630312ms","start":"2025-10-18T15:05:36.405187Z","end":"2025-10-18T15:05:36.619817Z","steps":["trace[485509943] 'read index received'  (duration: 133.835413ms)","trace[485509943] 'applied index is now lower than readState.Index'  (duration: 80.794144ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T15:05:36.61987Z","caller":"traceutil/trace.go:171","msg":"trace[1343467923] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"216.839382ms","start":"2025-10-18T15:05:36.403003Z","end":"2025-10-18T15:05:36.619842Z","steps":["trace[1343467923] 'process raft request'  (duration: 136.039992ms)","trace[1343467923] 'compare'  (duration: 80.653431ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T15:05:36.620107Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.913224ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1122"}
	{"level":"info","ts":"2025-10-18T15:05:36.620198Z","caller":"traceutil/trace.go:171","msg":"trace[1941750066] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:658; }","duration":"215.024147ms","start":"2025-10-18T15:05:36.405161Z","end":"2025-10-18T15:05:36.620185Z","steps":["trace[1941750066] 'agreement among raft nodes before linearized reading'  (duration: 214.74695ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:05:42 up  2:48,  0 user,  load average: 3.62, 2.83, 1.90
	Linux old-k8s-version-948537 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [67ecafd74cf06e59fa294c1705e72d6c1eee8307b1739175eda1df37d8321210] <==
	I1018 15:04:48.348584       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 15:04:48.348861       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 15:04:48.349041       1 main.go:148] setting mtu 1500 for CNI 
	I1018 15:04:48.349067       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 15:04:48.349092       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T15:04:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 15:04:48.548890       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 15:04:48.549002       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 15:04:48.549039       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 15:04:48.688618       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 15:04:48.889683       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 15:04:48.889734       1 metrics.go:72] Registering metrics
	I1018 15:04:48.890640       1 controller.go:711] "Syncing nftables rules"
	I1018 15:04:58.549703       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:04:58.549802       1 main.go:301] handling current node
	I1018 15:05:08.550410       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:05:08.550456       1 main.go:301] handling current node
	I1018 15:05:18.549020       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:05:18.549077       1 main.go:301] handling current node
	I1018 15:05:28.551045       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:05:28.551099       1 main.go:301] handling current node
	I1018 15:05:38.556063       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:05:38.556099       1 main.go:301] handling current node
	
	
	==> kube-apiserver [851f6b38dcd85d53e129d77afb0ca322c1c82f4dcc331a5606dc1cbaa443e3f6] <==
	I1018 15:04:46.853511       1 shared_informer.go:318] Caches are synced for configmaps
	I1018 15:04:46.853559       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 15:04:46.853578       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1018 15:04:46.853712       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 15:04:46.853808       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 15:04:46.853924       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1018 15:04:46.854705       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 15:04:46.854748       1 aggregator.go:166] initial CRD sync complete...
	I1018 15:04:46.854760       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 15:04:46.854767       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 15:04:46.854788       1 cache.go:39] Caches are synced for autoregister controller
	E1018 15:04:46.858822       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 15:04:46.885851       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1018 15:04:47.689190       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 15:04:47.722775       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 15:04:47.750567       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:04:47.759132       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:04:47.763071       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:04:47.772488       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 15:04:47.833092       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.92.9"}
	I1018 15:04:47.847661       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.244.46"}
	I1018 15:04:59.065827       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 15:04:59.065870       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 15:04:59.278431       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 15:04:59.329348       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [44dad120630eb2d0733b71694fa13433f00c53f74453d3fb34d10d2c5e2c1174] <==
	I1018 15:04:59.378173       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1018 15:04:59.388967       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="297.762784ms"
	I1018 15:04:59.389102       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.74µs"
	I1018 15:04:59.392090       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-h786w"
	I1018 15:04:59.392987       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-fsjwd"
	I1018 15:04:59.399526       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="31.830764ms"
	I1018 15:04:59.400748       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="22.829975ms"
	I1018 15:04:59.406452       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.651517ms"
	I1018 15:04:59.406533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="42.412µs"
	I1018 15:04:59.407789       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="8.221021ms"
	I1018 15:04:59.407863       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="39.528µs"
	I1018 15:04:59.410070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.041µs"
	I1018 15:04:59.418632       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="68.747µs"
	I1018 15:04:59.596140       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 15:04:59.663714       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 15:04:59.663743       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 15:05:02.837391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.482µs"
	I1018 15:05:03.846266       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.186µs"
	I1018 15:05:04.848454       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="112.324µs"
	I1018 15:05:05.864525       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.963902ms"
	I1018 15:05:05.864640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="60.995µs"
	I1018 15:05:23.902096       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.262µs"
	I1018 15:05:25.648440       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.894799ms"
	I1018 15:05:25.648565       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.517µs"
	I1018 15:05:29.711208       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="97.072µs"
	
	
	==> kube-proxy [f6b23d7900af3b31399d5fe6ff8b1e0a4f89b0cb9d8e045f2c6bf85fc2a3c4da] <==
	I1018 15:04:48.145063       1 server_others.go:69] "Using iptables proxy"
	I1018 15:04:48.155222       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1018 15:04:48.175297       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 15:04:48.177664       1 server_others.go:152] "Using iptables Proxier"
	I1018 15:04:48.177702       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 15:04:48.177711       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 15:04:48.177743       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 15:04:48.178051       1 server.go:846] "Version info" version="v1.28.0"
	I1018 15:04:48.178067       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:04:48.178571       1 config.go:97] "Starting endpoint slice config controller"
	I1018 15:04:48.178581       1 config.go:188] "Starting service config controller"
	I1018 15:04:48.178600       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 15:04:48.178602       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 15:04:48.178607       1 config.go:315] "Starting node config controller"
	I1018 15:04:48.178623       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 15:04:48.278869       1 shared_informer.go:318] Caches are synced for service config
	I1018 15:04:48.278898       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1018 15:04:48.278879       1 shared_informer.go:318] Caches are synced for node config
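	kube-proxy above selected the iptables proxier and synced all three of its caches almost immediately. The rules it programs land in the nat table; a sketch for spot-checking them on the node (KUBE-SERVICES is the proxier's top-level chain):
	
	  sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20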
	
	
	==> kube-scheduler [66072254c9bf69ad4fa0d45670ab4ee9fbc8ac23b9081209ca73e1a08513bb77] <==
	I1018 15:04:44.810013       1 serving.go:348] Generated self-signed cert in-memory
	W1018 15:04:46.795357       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 15:04:46.795485       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 15:04:46.795533       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 15:04:46.795587       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 15:04:46.811315       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1018 15:04:46.811344       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:04:46.812949       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:04:46.812986       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1018 15:04:46.814126       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1018 15:04:46.814162       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1018 15:04:46.913649       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 15:04:59 old-k8s-version-948537 kubelet[725]: I1018 15:04:59.563508     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e5136374-8aee-44ed-af01-888265e276e1-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-h786w\" (UID: \"e5136374-8aee-44ed-af01-888265e276e1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w"
	Oct 18 15:04:59 old-k8s-version-948537 kubelet[725]: I1018 15:04:59.563577     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gpcq\" (UniqueName: \"kubernetes.io/projected/73d6354b-baf5-405e-9584-b844619eb7e4-kube-api-access-9gpcq\") pod \"kubernetes-dashboard-8694d4445c-fsjwd\" (UID: \"73d6354b-baf5-405e-9584-b844619eb7e4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fsjwd"
	Oct 18 15:04:59 old-k8s-version-948537 kubelet[725]: I1018 15:04:59.563779     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/73d6354b-baf5-405e-9584-b844619eb7e4-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-fsjwd\" (UID: \"73d6354b-baf5-405e-9584-b844619eb7e4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fsjwd"
	Oct 18 15:04:59 old-k8s-version-948537 kubelet[725]: I1018 15:04:59.563838     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d954x\" (UniqueName: \"kubernetes.io/projected/e5136374-8aee-44ed-af01-888265e276e1-kube-api-access-d954x\") pod \"dashboard-metrics-scraper-5f989dc9cf-h786w\" (UID: \"e5136374-8aee-44ed-af01-888265e276e1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w"
	Oct 18 15:05:02 old-k8s-version-948537 kubelet[725]: I1018 15:05:02.826771     725 scope.go:117] "RemoveContainer" containerID="bd805eb7955df2416c619e6863711d56ad5d28a983f416cd7798dfd897124e59"
	Oct 18 15:05:03 old-k8s-version-948537 kubelet[725]: I1018 15:05:03.831511     725 scope.go:117] "RemoveContainer" containerID="bd805eb7955df2416c619e6863711d56ad5d28a983f416cd7798dfd897124e59"
	Oct 18 15:05:03 old-k8s-version-948537 kubelet[725]: I1018 15:05:03.831749     725 scope.go:117] "RemoveContainer" containerID="44d616f811d03ec4fe2e458514a60eb296291128a62959a31cbf8d5a20cdd3b7"
	Oct 18 15:05:03 old-k8s-version-948537 kubelet[725]: E1018 15:05:03.832121     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h786w_kubernetes-dashboard(e5136374-8aee-44ed-af01-888265e276e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w" podUID="e5136374-8aee-44ed-af01-888265e276e1"
	Oct 18 15:05:04 old-k8s-version-948537 kubelet[725]: I1018 15:05:04.835497     725 scope.go:117] "RemoveContainer" containerID="44d616f811d03ec4fe2e458514a60eb296291128a62959a31cbf8d5a20cdd3b7"
	Oct 18 15:05:04 old-k8s-version-948537 kubelet[725]: E1018 15:05:04.835879     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h786w_kubernetes-dashboard(e5136374-8aee-44ed-af01-888265e276e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w" podUID="e5136374-8aee-44ed-af01-888265e276e1"
	Oct 18 15:05:05 old-k8s-version-948537 kubelet[725]: I1018 15:05:05.852475     725 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fsjwd" podStartSLOduration=1.056866081 podCreationTimestamp="2025-10-18 15:04:59 +0000 UTC" firstStartedPulling="2025-10-18 15:04:59.724506797 +0000 UTC m=+16.089795274" lastFinishedPulling="2025-10-18 15:05:05.520055919 +0000 UTC m=+21.885344393" observedRunningTime="2025-10-18 15:05:05.852299221 +0000 UTC m=+22.217587705" watchObservedRunningTime="2025-10-18 15:05:05.8524152 +0000 UTC m=+22.217703685"
	Oct 18 15:05:09 old-k8s-version-948537 kubelet[725]: I1018 15:05:09.701004     725 scope.go:117] "RemoveContainer" containerID="44d616f811d03ec4fe2e458514a60eb296291128a62959a31cbf8d5a20cdd3b7"
	Oct 18 15:05:09 old-k8s-version-948537 kubelet[725]: E1018 15:05:09.701461     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h786w_kubernetes-dashboard(e5136374-8aee-44ed-af01-888265e276e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w" podUID="e5136374-8aee-44ed-af01-888265e276e1"
	Oct 18 15:05:18 old-k8s-version-948537 kubelet[725]: I1018 15:05:18.873350     725 scope.go:117] "RemoveContainer" containerID="52b03114a7d11a70da29b03a2cdcf4e45d69beb3474365226e6d235c2df948ef"
	Oct 18 15:05:23 old-k8s-version-948537 kubelet[725]: I1018 15:05:23.744855     725 scope.go:117] "RemoveContainer" containerID="44d616f811d03ec4fe2e458514a60eb296291128a62959a31cbf8d5a20cdd3b7"
	Oct 18 15:05:23 old-k8s-version-948537 kubelet[725]: I1018 15:05:23.888435     725 scope.go:117] "RemoveContainer" containerID="44d616f811d03ec4fe2e458514a60eb296291128a62959a31cbf8d5a20cdd3b7"
	Oct 18 15:05:23 old-k8s-version-948537 kubelet[725]: I1018 15:05:23.888639     725 scope.go:117] "RemoveContainer" containerID="2d707f4e636f3c33af0939dcc558810386c9490c2aadb8081a1acd6065892e82"
	Oct 18 15:05:23 old-k8s-version-948537 kubelet[725]: E1018 15:05:23.889039     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h786w_kubernetes-dashboard(e5136374-8aee-44ed-af01-888265e276e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w" podUID="e5136374-8aee-44ed-af01-888265e276e1"
	Oct 18 15:05:29 old-k8s-version-948537 kubelet[725]: I1018 15:05:29.700762     725 scope.go:117] "RemoveContainer" containerID="2d707f4e636f3c33af0939dcc558810386c9490c2aadb8081a1acd6065892e82"
	Oct 18 15:05:29 old-k8s-version-948537 kubelet[725]: E1018 15:05:29.701228     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h786w_kubernetes-dashboard(e5136374-8aee-44ed-af01-888265e276e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w" podUID="e5136374-8aee-44ed-af01-888265e276e1"
	Oct 18 15:05:39 old-k8s-version-948537 kubelet[725]: I1018 15:05:39.486531     725 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 18 15:05:39 old-k8s-version-948537 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 15:05:39 old-k8s-version-948537 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 15:05:39 old-k8s-version-948537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 15:05:39 old-k8s-version-948537 systemd[1]: kubelet.service: Consumed 1.581s CPU time.
	
	
	==> kubernetes-dashboard [ca59ac639c4af3d27021b467cc03eca4d72a3f9c7d8418fc024c78d9006549fe] <==
	2025/10/18 15:05:05 Using namespace: kubernetes-dashboard
	2025/10/18 15:05:05 Using in-cluster config to connect to apiserver
	2025/10/18 15:05:05 Using secret token for csrf signing
	2025/10/18 15:05:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 15:05:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 15:05:05 Successful initial request to the apiserver, version: v1.28.0
	2025/10/18 15:05:05 Generating JWE encryption key
	2025/10/18 15:05:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 15:05:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 15:05:06 Initializing JWE encryption key from synchronized object
	2025/10/18 15:05:06 Creating in-cluster Sidecar client
	2025/10/18 15:05:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 15:05:06 Serving insecurely on HTTP port: 9090
	2025/10/18 15:05:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 15:05:05 Starting overwatch
	
	
	==> storage-provisioner [067057e99a71c35cef6be48c228170e8a97bf712bc8e81bb891f09faeeff93cf] <==
	I1018 15:05:18.941033       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 15:05:18.951825       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 15:05:18.951868       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 15:05:36.400337       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 15:05:36.400418       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ecc713f-94b4-44e1-9a32-99bd38e1b784", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-948537_a22978d7-d3eb-4973-9024-9d54857f0397 became leader
	I1018 15:05:36.400506       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-948537_a22978d7-d3eb-4973-9024-9d54857f0397!
	I1018 15:05:36.501436       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-948537_a22978d7-d3eb-4973-9024-9d54857f0397!
	
	
	==> storage-provisioner [52b03114a7d11a70da29b03a2cdcf4e45d69beb3474365226e6d235c2df948ef] <==
	I1018 15:04:48.117177       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 15:05:18.120727       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
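The kube-scheduler warning in the logs above carries its own remediation hint. As a minimal sketch, keeping ROLEBINDING_NAME and YOUR_NS:YOUR_SA as the placeholders from the log message itself, the suggested binding would be:

	# allow the bound service account to read the extension-apiserver-authentication configmap
	kubectl create rolebinding -n kube-system ROLEBINDING_NAME \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=YOUR_NS:YOUR_SA

The scheduler continues without authentication configuration in this run, so that warning is cosmetic here; the crash-looping dashboard-metrics-scraper and the storage-provisioner's i/o timeout against 10.96.0.1:443 are the more telling entries in this log.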
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-948537 -n old-k8s-version-948537
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-948537 -n old-k8s-version-948537: exit status 2 (395.376103ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
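minikube's status --format flag takes a Go template over its status struct; {{.APIServer}} is queried here and {{.Host}} further below. As a sketch, assuming the same profile name ({{.Kubelet}} is another field the status command exposes), several fields can be read in one call:

	out/minikube-linux-amd64 status -p old-k8s-version-948537 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'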
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-948537 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
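One caveat on the field selector used in the post-mortem above: a pod whose container is in CrashLoopBackOff (as dashboard-metrics-scraper is in these logs) still reports phase Running, since its container counts as restarting, so status.phase!=Running can return nothing even while containers crash. A rough alternative that filters on the printed STATUS column instead:

	kubectl --context old-k8s-version-948537 get po -A | grep -vE 'Running|Completed'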
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-948537
helpers_test.go:243: (dbg) docker inspect old-k8s-version-948537:

-- stdout --
	[
	    {
	        "Id": "3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7",
	        "Created": "2025-10-18T15:03:24.489578766Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 331956,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T15:04:37.494464413Z",
	            "FinishedAt": "2025-10-18T15:04:35.693961963Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7/hostname",
	        "HostsPath": "/var/lib/docker/containers/3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7/hosts",
	        "LogPath": "/var/lib/docker/containers/3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7/3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7-json.log",
	        "Name": "/old-k8s-version-948537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-948537:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-948537",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3730ae01e0132e7fb57b36d02a1a9a1f16cc072d1632a2466e3265a49bb485e7",
	                "LowerDir": "/var/lib/docker/overlay2/4e9f5ec3d6a3c3a3e9655b5f49c6cec160d792c920c0ef7bc94b940d3800dbd6-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e9f5ec3d6a3c3a3e9655b5f49c6cec160d792c920c0ef7bc94b940d3800dbd6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e9f5ec3d6a3c3a3e9655b5f49c6cec160d792c920c0ef7bc94b940d3800dbd6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e9f5ec3d6a3c3a3e9655b5f49c6cec160d792c920c0ef7bc94b940d3800dbd6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-948537",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-948537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-948537",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-948537",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-948537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d59c22038a881d06227490cfb017258ab78e228b1ed96a50540d6ef6c22f3050",
	            "SandboxKey": "/var/run/docker/netns/d59c22038a88",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-948537": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:83:f7:70:c5:75",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "61ee9ee46471b491cbfab6422a4dbe2929bd7ab545265cf14dbd822e55ffe7f8",
	                    "EndpointID": "607967f700c84c7d6e0efa47e8698b7a12119bde6d74b6bae18612a1c9344ce8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-948537",
	                        "3730ae01e013"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
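Single fields of this inspect output can be extracted with docker's -f/--format Go template instead of scanning the full JSON; for example, the host port mapped to the container's SSH port (22/tcp, shown as 33063 above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-948537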
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-948537 -n old-k8s-version-948537
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-948537 -n old-k8s-version-948537: exit status 2 (354.947212ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-948537 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-948537 logs -n 25: (1.16747181s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p NoKubernetes-286873 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-286873       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │                     │
	│ delete  │ -p NoKubernetes-286873                                                                                                                                                                                                                        │ NoKubernetes-286873       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:02 UTC │
	│ start   │ -p cert-options-648086 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-648086       │ jenkins │ v1.37.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:03 UTC │
	│ start   │ -p missing-upgrade-635158 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-635158    │ jenkins │ v1.32.0 │ 18 Oct 25 15:02 UTC │ 18 Oct 25 15:03 UTC │
	│ ssh     │ cert-options-648086 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-648086       │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:03 UTC │
	│ ssh     │ -p cert-options-648086 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-648086       │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:03 UTC │
	│ delete  │ -p cert-options-648086                                                                                                                                                                                                                        │ cert-options-648086       │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:03 UTC │
	│ start   │ -p old-k8s-version-948537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:04 UTC │
	│ start   │ -p missing-upgrade-635158 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-635158    │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:04 UTC │
	│ delete  │ -p missing-upgrade-635158                                                                                                                                                                                                                     │ missing-upgrade-635158    │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ start   │ -p no-preload-165275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165275         │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-948537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │                     │
	│ stop    │ -p old-k8s-version-948537 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-948537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ start   │ -p old-k8s-version-948537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-165275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-165275         │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ stop    │ -p no-preload-165275 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-165275         │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-833162 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-833162 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ delete  │ -p kubernetes-upgrade-833162                                                                                                                                                                                                                  │ kubernetes-upgrade-833162 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ addons  │ enable dashboard -p no-preload-165275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-165275         │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p embed-certs-775590 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-775590        │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ start   │ -p no-preload-165275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165275         │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ image   │ old-k8s-version-948537 image list --format=json                                                                                                                                                                                               │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ pause   │ -p old-k8s-version-948537 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-948537    │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:05:32
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:05:32.322781  340627 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:05:32.322923  340627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:05:32.322932  340627 out.go:374] Setting ErrFile to fd 2...
	I1018 15:05:32.322939  340627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:05:32.323149  340627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:05:32.323593  340627 out.go:368] Setting JSON to false
	I1018 15:05:32.325760  340627 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10083,"bootTime":1760789849,"procs":421,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:05:32.325892  340627 start.go:141] virtualization: kvm guest
	I1018 15:05:32.328173  340627 out.go:179] * [no-preload-165275] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:05:32.330445  340627 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:05:32.330449  340627 notify.go:220] Checking for updates...
	I1018 15:05:32.332989  340627 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:05:32.334297  340627 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:05:32.335637  340627 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 15:05:32.336885  340627 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:05:32.338196  340627 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:05:32.303687  340611 config.go:182] Loaded profile config "cert-expiration-327346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:05:32.303864  340611 config.go:182] Loaded profile config "no-preload-165275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:05:32.303998  340611 config.go:182] Loaded profile config "old-k8s-version-948537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 15:05:32.304137  340611 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:05:32.330760  340611 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:05:32.330956  340611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:05:32.404568  340611 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-18 15:05:32.390432859 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:05:32.404679  340611 docker.go:318] overlay module found
	I1018 15:05:32.406551  340611 out.go:179] * Using the docker driver based on user configuration
	I1018 15:05:32.340051  340627 config.go:182] Loaded profile config "no-preload-165275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:05:32.340766  340627 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:05:32.372446  340627 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:05:32.372613  340627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:05:32.446400  340627 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-18 15:05:32.430766643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:05:32.446560  340627 docker.go:318] overlay module found
	I1018 15:05:32.448513  340627 out.go:179] * Using the docker driver based on existing profile
	I1018 15:05:32.407972  340611 start.go:305] selected driver: docker
	I1018 15:05:32.407994  340611 start.go:925] validating driver "docker" against <nil>
	I1018 15:05:32.408010  340611 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:05:32.408810  340611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:05:32.476523  340611 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-18 15:05:32.464528039 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:05:32.476767  340611 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 15:05:32.477197  340611 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:05:32.479387  340611 out.go:179] * Using Docker driver with root privileges
	I1018 15:05:32.480779  340611 cni.go:84] Creating CNI manager for ""
	I1018 15:05:32.480844  340611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:05:32.480854  340611 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 15:05:32.481006  340611 start.go:349] cluster config:
	{Name:embed-certs-775590 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-775590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:05:32.482645  340611 out.go:179] * Starting "embed-certs-775590" primary control-plane node in "embed-certs-775590" cluster
	I1018 15:05:32.484017  340611 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:05:32.485478  340611 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:05:32.489049  340611 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:05:32.489109  340611 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 15:05:32.489138  340611 cache.go:58] Caching tarball of preloaded images
	I1018 15:05:32.489158  340611 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 15:05:32.489267  340611 preload.go:233] Found /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 15:05:32.489282  340611 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 15:05:32.489416  340611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/config.json ...
	I1018 15:05:32.489442  340611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/config.json: {Name:mk27b8d43a78442b684da2a96570796e5d767c0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:05:32.512651  340611 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:05:32.512676  340611 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 15:05:32.512696  340611 cache.go:232] Successfully downloaded all kic artifacts
	I1018 15:05:32.512731  340611 start.go:360] acquireMachinesLock for embed-certs-775590: {Name:mk7c2e78c8f1aa9ee940b8ae2274718f1467b317 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.512871  340611 start.go:364] duration metric: took 119.281µs to acquireMachinesLock for "embed-certs-775590"
	I1018 15:05:32.512902  340611 start.go:93] Provisioning new machine with config: &{Name:embed-certs-775590 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-775590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:05:32.513016  340611 start.go:125] createHost starting for "" (driver="docker")
	I1018 15:05:32.449756  340627 start.go:305] selected driver: docker
	I1018 15:05:32.449791  340627 start.go:925] validating driver "docker" against &{Name:no-preload-165275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-165275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:05:32.449907  340627 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:05:32.450728  340627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:05:32.516771  340627 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-18 15:05:32.50652424 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
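
The docker info dump above is the output of `docker system info --format "{{json .}}"` decoded into minikube's info struct. As a minimal, self-contained sketch of the same probe, decoding only a few of the fields the planner actually consults (the partial struct below is an assumption; its field names are Docker's JSON keys):

    // Hedged sketch: run `docker system info` with JSON output and decode
    // only the fields we care about. The struct is deliberately partial.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type dockerInfo struct {
        CgroupDriver  string // "systemd" here, which drives the crio config later
        NCPU          int
        MemTotal      int64
        ServerVersion string
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("driver=%s cpus=%d mem=%d version=%s\n",
            info.CgroupDriver, info.NCPU, info.MemTotal, info.ServerVersion)
    }
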
	I1018 15:05:32.517130  340627 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:05:32.517162  340627 cni.go:84] Creating CNI manager for ""
	I1018 15:05:32.517230  340627 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
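
The kindnet recommendation is driven purely by the driver/runtime pair logged above: a container-based driver running a non-Docker runtime needs an explicit CNI. A hedged sketch of that decision rule (function name and fallback case are illustrative, not minikube's actual cni package):

    // Hedged sketch of the driver+runtime -> CNI choice behind the
    // "recommending kindnet" line.
    package main

    import "fmt"

    func chooseCNI(driver, runtime, requested string) string {
        if requested != "" {
            return requested // an explicit --cni flag always wins
        }
        if (driver == "docker" || driver == "podman") && runtime != "docker" {
            return "kindnet"
        }
        return "bridge" // simplification; the real table has more cases
    }

    func main() {
        fmt.Println(chooseCNI("docker", "crio", "")) // kindnet, as in the log
    }
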
	I1018 15:05:32.517297  340627 start.go:349] cluster config:
	{Name:no-preload-165275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-165275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:05:32.519011  340627 out.go:179] * Starting "no-preload-165275" primary control-plane node in "no-preload-165275" cluster
	I1018 15:05:32.520160  340627 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:05:32.521409  340627 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:05:32.522517  340627 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:05:32.522612  340627 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 15:05:32.522673  340627 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/config.json ...
	I1018 15:05:32.522840  340627 cache.go:107] acquiring lock: {Name:mkbaa1a4bd6915358a4926d0351a0e021f54346d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.522958  340627 cache.go:107] acquiring lock: {Name:mk314bda0d4e90238c0ed6d4b64ac6d98bf9f0e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.522979  340627 cache.go:107] acquiring lock: {Name:mkd6be508b79cf0b608e0017623eb5fbcb6b5bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.523032  340627 cache.go:115] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 15:05:32.522985  340627 cache.go:107] acquiring lock: {Name:mk1d022df204329fecb8dfdd48f2e6a2af0f3a7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.523046  340627 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 92.816µs
	I1018 15:05:32.523012  340627 cache.go:107] acquiring lock: {Name:mkecab1d576a5cee47304bc15dc72f9970f45c8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.523063  340627 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 15:05:32.523033  340627 cache.go:115] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 15:05:32.522992  340627 cache.go:107] acquiring lock: {Name:mk12de1c820b10b304bb440284c1b6916a987889 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.523079  340627 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 273.398µs
	I1018 15:05:32.523089  340627 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 15:05:32.522840  340627 cache.go:107] acquiring lock: {Name:mk72463510bc510f518ea67b24aec16a4002f6be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.522842  340627 cache.go:107] acquiring lock: {Name:mkcd0e2847def5d7525f56b72d40ef8eb4661666 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.523208  340627 cache.go:115] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 15:05:32.523217  340627 cache.go:115] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 15:05:32.523227  340627 cache.go:115] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1018 15:05:32.523232  340627 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 299.62µs
	I1018 15:05:32.523240  340627 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 330.8µs
	I1018 15:05:32.523238  340627 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 260.583µs
	I1018 15:05:32.523248  340627 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 15:05:32.523250  340627 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 15:05:32.523254  340627 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 15:05:32.523215  340627 cache.go:115] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 15:05:32.523250  340627 cache.go:115] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 15:05:32.523290  340627 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 327.038µs
	I1018 15:05:32.523247  340627 cache.go:115] /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 15:05:32.523302  340627 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 15:05:32.523268  340627 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 486.461µs
	I1018 15:05:32.523317  340627 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 15:05:32.523311  340627 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 499.837µs
	I1018 15:05:32.523340  340627 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 15:05:32.523350  340627 cache.go:87] Successfully saved all images to host disk.
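
Each of the cache.go lines above follows the same pattern: take a per-image lock, stat the target tarball, and return immediately if it already exists, which is why every image resolves in a few hundred microseconds here. A rough sketch under that reading (the in-process lock map and helper name are stand-ins for minikube's file-based locks):

    // Hedged sketch of the per-image cache check: lock, stat, skip if present.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "sync"
        "time"
    )

    var locks sync.Map // path -> *sync.Mutex, stand-in for minikube's file locks

    func cacheImage(image, cacheDir string) error {
        dst := filepath.Join(cacheDir, filepath.FromSlash(image))
        mu, _ := locks.LoadOrStore(dst, &sync.Mutex{})
        mu.(*sync.Mutex).Lock()
        defer mu.(*sync.Mutex).Unlock()

        start := time.Now()
        if _, err := os.Stat(dst); err == nil {
            fmt.Printf("cache image %q took %s (exists)\n", image, time.Since(start))
            return nil // tarball already on disk: nothing to download
        }
        // ...download and save the tarball here (omitted in this sketch)...
        return fmt.Errorf("not cached: %s", dst)
    }

    func main() {
        _ = cacheImage("registry.k8s.io/pause_3.10.1", os.TempDir())
    }
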
	I1018 15:05:32.544879  340627 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:05:32.544900  340627 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 15:05:32.544927  340627 cache.go:232] Successfully downloaded all kic artifacts
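
"exists in daemon, skipping load" is a cheap existence probe: `docker image inspect` exits non-zero when the pinned image is absent. A small sketch of that check (the digest suffix from the log is omitted here only for brevity):

    // Hedged sketch of the "found ... in local docker daemon, skipping pull" check.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func inDaemon(ref string) bool {
        // --format {{.Id}} keeps the output small; a non-zero exit means "absent".
        return exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Run() == nil
    }

    func main() {
        ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757"
        if inDaemon(ref) {
            fmt.Println("exists in daemon, skipping load")
        } else {
            fmt.Println("would pull", ref)
        }
    }
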
	I1018 15:05:32.544961  340627 start.go:360] acquireMachinesLock for no-preload-165275: {Name:mk24a38ac6e4e8fc6cc6d51b67ac49da84578c77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:05:32.545019  340627 start.go:364] duration metric: took 38.591µs to acquireMachinesLock for "no-preload-165275"
	I1018 15:05:32.545046  340627 start.go:96] Skipping create...Using existing machine configuration
	I1018 15:05:32.545053  340627 fix.go:54] fixHost starting: 
	I1018 15:05:32.545299  340627 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:05:32.565386  340627 fix.go:112] recreateIfNeeded on no-preload-165275: state=Stopped err=<nil>
	W1018 15:05:32.565420  340627 fix.go:138] unexpected machine state, will restart: <nil>
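
With state=Stopped, fixHost takes the restart path rather than recreating the machine: inspect the container's state, and if it is merely stopped, a plain `docker start` brings it back with its volume and network intact. Roughly (helper name assumed):

    // Hedged sketch of the fix.go flow: inspect state, restart if stopped.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format", "{{.State.Status}}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        name := "no-preload-165275"
        state, err := containerState(name)
        if err != nil {
            fmt.Println("container missing, would create from scratch")
            return
        }
        if state == "running" {
            fmt.Println("already running, nothing to fix")
            return
        }
        // state was "Stopped" in the log: restart in place.
        if err := exec.Command("docker", "start", name).Run(); err != nil {
            fmt.Println("restart failed:", err)
        }
    }
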
	I1018 15:05:32.515935  340611 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 15:05:32.516164  340611 start.go:159] libmachine.API.Create for "embed-certs-775590" (driver="docker")
	I1018 15:05:32.516195  340611 client.go:168] LocalClient.Create starting
	I1018 15:05:32.516257  340611 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem
	I1018 15:05:32.516290  340611 main.go:141] libmachine: Decoding PEM data...
	I1018 15:05:32.516306  340611 main.go:141] libmachine: Parsing certificate...
	I1018 15:05:32.516383  340611 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem
	I1018 15:05:32.516414  340611 main.go:141] libmachine: Decoding PEM data...
	I1018 15:05:32.516432  340611 main.go:141] libmachine: Parsing certificate...
	I1018 15:05:32.516820  340611 cli_runner.go:164] Run: docker network inspect embed-certs-775590 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 15:05:32.534835  340611 cli_runner.go:211] docker network inspect embed-certs-775590 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 15:05:32.534970  340611 network_create.go:284] running [docker network inspect embed-certs-775590] to gather additional debugging logs...
	I1018 15:05:32.535001  340611 cli_runner.go:164] Run: docker network inspect embed-certs-775590
	W1018 15:05:32.553238  340611 cli_runner.go:211] docker network inspect embed-certs-775590 returned with exit code 1
	I1018 15:05:32.553270  340611 network_create.go:287] error running [docker network inspect embed-certs-775590]: docker network inspect embed-certs-775590: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-775590 not found
	I1018 15:05:32.553288  340611 network_create.go:289] output of [docker network inspect embed-certs-775590]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-775590 not found
	
	** /stderr **
	I1018 15:05:32.553417  340611 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:05:32.574465  340611 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-67ded9675d49 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:eb:89:76:0f:a6} reservation:<nil>}
	I1018 15:05:32.575099  340611 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4b365c92bc46 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:db:b6:83:36:75} reservation:<nil>}
	I1018 15:05:32.575699  340611 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ab6063c7cdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:eb:32:cc:ab:b4} reservation:<nil>}
	I1018 15:05:32.576712  340611 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002012720}
	I1018 15:05:32.576740  340611 network_create.go:124] attempt to create docker network embed-certs-775590 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 15:05:32.576796  340611 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-775590 embed-certs-775590
	I1018 15:05:32.637990  340611 network_create.go:108] docker network embed-certs-775590 192.168.76.0/24 created
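
The three "skipping subnet" lines walk candidate /24s until one has no existing bridge on it; the stride of 9 between networks (49, 58, 67, 76) is inferred from the addresses shown. A sketch of that scan, seeded with the three taken subnets from the log:

    // Hedged sketch of the free-subnet scan behind network_create.go.
    package main

    import (
        "fmt"
        "net"
    )

    func firstFreeSubnet(taken map[string]bool) string {
        for third := 49; third < 255; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if _, _, err := net.ParseCIDR(cidr); err != nil {
                continue // defensive; all generated candidates are valid
            }
            if !taken[cidr] {
                return cidr
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{ // the three bridges the log skips
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
        }
        fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24, as chosen above
    }
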
	I1018 15:05:32.638033  340611 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-775590" container
	I1018 15:05:32.638105  340611 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 15:05:32.657500  340611 cli_runner.go:164] Run: docker volume create embed-certs-775590 --label name.minikube.sigs.k8s.io=embed-certs-775590 --label created_by.minikube.sigs.k8s.io=true
	I1018 15:05:32.678859  340611 oci.go:103] Successfully created a docker volume embed-certs-775590
	I1018 15:05:32.678961  340611 cli_runner.go:164] Run: docker run --rm --name embed-certs-775590-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-775590 --entrypoint /usr/bin/test -v embed-certs-775590:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 15:05:33.102810  340611 oci.go:107] Successfully prepared a docker volume embed-certs-775590
	I1018 15:05:33.102874  340611 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:05:33.102900  340611 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 15:05:33.103021  340611 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-775590:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 15:05:32.568243  340627 out.go:252] * Restarting existing docker container for "no-preload-165275" ...
	I1018 15:05:32.568366  340627 cli_runner.go:164] Run: docker start no-preload-165275
	I1018 15:05:32.847388  340627 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:05:32.867717  340627 kic.go:430] container "no-preload-165275" state is running.
	I1018 15:05:32.868127  340627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-165275
	I1018 15:05:32.888987  340627 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/config.json ...
	I1018 15:05:32.889320  340627 machine.go:93] provisionDockerMachine start ...
	I1018 15:05:32.889410  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:32.909562  340627 main.go:141] libmachine: Using SSH client type: native
	I1018 15:05:32.909884  340627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1018 15:05:32.909907  340627 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:05:32.910690  340627 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50340->127.0.0.1:33068: read: connection reset by peer
	I1018 15:05:36.050301  340627 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-165275
	
	I1018 15:05:36.050337  340627 ubuntu.go:182] provisioning hostname "no-preload-165275"
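
Note the gap before this step: the dial at 15:05:32.910 was reset because sshd inside the freshly started container was not yet accepting connections, and libmachine retried until it answered at 15:05:36. A sketch of such a readiness loop (interval and timeout are assumptions):

    // Hedged sketch: retry a TCP dial until the container's sshd is up.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil // port is accepting; the SSH handshake can proceed
            }
            time.Sleep(500 * time.Millisecond) // resets are expected at first
        }
        return fmt.Errorf("ssh on %s not ready after %s", addr, timeout)
    }

    func main() {
        fmt.Println(waitForSSH("127.0.0.1:33068", 30*time.Second))
    }
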
	I1018 15:05:36.050419  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:36.069176  340627 main.go:141] libmachine: Using SSH client type: native
	I1018 15:05:36.069420  340627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1018 15:05:36.069439  340627 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-165275 && echo "no-preload-165275" | sudo tee /etc/hostname
	I1018 15:05:36.268196  340627 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-165275
	
	I1018 15:05:36.268317  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:36.286657  340627 main.go:141] libmachine: Using SSH client type: native
	I1018 15:05:36.286959  340627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1018 15:05:36.286986  340627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-165275' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-165275/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-165275' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:05:36.422886  340627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 15:05:36.422931  340627 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 15:05:36.422972  340627 ubuntu.go:190] setting up certificates
	I1018 15:05:36.422986  340627 provision.go:84] configureAuth start
	I1018 15:05:36.423042  340627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-165275
	I1018 15:05:36.443238  340627 provision.go:143] copyHostCerts
	I1018 15:05:36.443303  340627 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem, removing ...
	I1018 15:05:36.443319  340627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:05:36.529692  340627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 15:05:36.529899  340627 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem, removing ...
	I1018 15:05:36.529951  340627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:05:36.530009  340627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 15:05:36.530107  340627 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem, removing ...
	I1018 15:05:36.530120  340627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:05:36.530159  340627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 15:05:36.530240  340627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.no-preload-165275 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-165275]
	I1018 15:05:37.004116  340627 provision.go:177] copyRemoteCerts
	I1018 15:05:37.004211  340627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:05:37.004264  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:37.023055  340627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa Username:docker}
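
ssh_runner drives every remote command that follows through a client like the one just created: key auth as the docker user against the forwarded port. A self-contained sketch with golang.org/x/crypto/ssh (host-key verification is skipped here only because the target is a local kic container):

    // Hedged sketch of running one remote command per SSH session.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33068", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container only
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("cat /etc/os-release")
        fmt.Println(string(out), err)
    }
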
	I1018 15:05:37.120287  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 15:05:37.138000  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 15:05:37.156201  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:05:37.174489  340627 provision.go:87] duration metric: took 751.483239ms to configureAuth
	I1018 15:05:37.174516  340627 ubuntu.go:206] setting minikube options for container-runtime
	I1018 15:05:37.174695  340627 config.go:182] Loaded profile config "no-preload-165275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:05:37.174815  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:37.191936  340627 main.go:141] libmachine: Using SSH client type: native
	I1018 15:05:37.192182  340627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1018 15:05:37.192210  340627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:05:37.643604  340627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 15:05:37.643633  340627 machine.go:96] duration metric: took 4.754293354s to provisionDockerMachine
	I1018 15:05:37.643648  340627 start.go:293] postStartSetup for "no-preload-165275" (driver="docker")
	I1018 15:05:37.643663  340627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:05:37.643726  340627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:05:37.643781  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:37.664082  340627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa Username:docker}
	I1018 15:05:37.763494  340627 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:05:37.767692  340627 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 15:05:37.767729  340627 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 15:05:37.767743  340627 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/addons for local assets ...
	I1018 15:05:37.767815  340627 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/files for local assets ...
	I1018 15:05:37.767982  340627 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem -> 931872.pem in /etc/ssl/certs
	I1018 15:05:37.768120  340627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:05:37.778213  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:05:37.801259  340627 start.go:296] duration metric: took 157.589139ms for postStartSetup
	I1018 15:05:37.801352  340627 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 15:05:37.801421  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:37.820685  340627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa Username:docker}
	I1018 15:05:37.917124  340627 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 15:05:37.922499  340627 fix.go:56] duration metric: took 5.377436687s for fixHost
	I1018 15:05:37.922532  340627 start.go:83] releasing machines lock for "no-preload-165275", held for 5.377501712s
	I1018 15:05:37.922616  340627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-165275
	I1018 15:05:37.942927  340627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:05:37.942988  340627 ssh_runner.go:195] Run: cat /version.json
	I1018 15:05:37.943012  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:37.943046  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:37.964343  340627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa Username:docker}
	I1018 15:05:37.964343  340627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa Username:docker}
	I1018 15:05:38.124894  340627 ssh_runner.go:195] Run: systemctl --version
	I1018 15:05:38.132145  340627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:05:38.182217  340627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:05:38.188148  340627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:05:38.188224  340627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:05:38.199158  340627 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 15:05:38.199186  340627 start.go:495] detecting cgroup driver to use...
	I1018 15:05:38.199223  340627 detect.go:190] detected "systemd" cgroup driver on host os
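
How the "systemd" driver is detected is not shown in the log; one common host-side heuristic (not necessarily detect.go's exact logic) is a cgroup v2 unified hierarchy combined with systemd as PID 1:

    // Hedged sketch of a cgroup-driver heuristic, assumptions noted above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func cgroupDriver() string {
        _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers") // exists only on cgroup v2
        comm, _ := os.ReadFile("/proc/1/comm")
        if err == nil && strings.TrimSpace(string(comm)) == "systemd" {
            return "systemd"
        }
        return "cgroupfs"
    }

    func main() {
        fmt.Println(cgroupDriver())
    }
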
	I1018 15:05:38.199284  340627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:05:38.215971  340627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:05:38.236162  340627 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:05:38.236228  340627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:05:38.261251  340627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:05:38.279720  340627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:05:38.388421  340627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:05:38.479449  340627 docker.go:234] disabling docker service ...
	I1018 15:05:38.479521  340627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:05:38.496118  340627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:05:38.510612  340627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:05:38.603398  340627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:05:38.699979  340627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 15:05:38.713119  340627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:05:38.729827  340627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 15:05:38.729897  340627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:38.739883  340627 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 15:05:38.739965  340627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:38.750794  340627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:38.760252  340627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:38.769483  340627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:05:38.778476  340627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:38.788481  340627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:38.797950  340627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:38.807888  340627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:05:38.818370  340627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 15:05:38.827393  340627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:05:38.918595  340627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 15:05:39.032614  340627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:05:39.032692  340627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:05:39.038707  340627 start.go:563] Will wait 60s for crictl version
	I1018 15:05:39.038770  340627 ssh_runner.go:195] Run: which crictl
	I1018 15:05:39.043453  340627 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 15:05:39.073719  340627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
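
Both "Will wait 60s" steps above are polling loops against a deadline, first on the socket path, then on a working crictl. A generic sketch (poll interval is an assumption; the log uses the full /usr/local/bin/crictl path):

    // Hedged sketch of the socket/crictl readiness polling.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func waitFor(desc string, timeout time.Duration, ready func() bool) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if ready() {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not ready after %s", desc, timeout)
    }

    func main() {
        sock := "/var/run/crio/crio.sock"
        err := waitFor("crio socket", 60*time.Second, func() bool {
            _, err := os.Stat(sock)
            return err == nil
        })
        if err == nil {
            err = waitFor("crictl", 60*time.Second, func() bool {
                return exec.Command("sudo", "crictl", "version").Run() == nil
            })
        }
        fmt.Println(err)
    }
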
	I1018 15:05:39.073797  340627 ssh_runner.go:195] Run: crio --version
	I1018 15:05:39.104591  340627 ssh_runner.go:195] Run: crio --version
	I1018 15:05:39.142135  340627 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 15:05:39.143477  340627 cli_runner.go:164] Run: docker network inspect no-preload-165275 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:05:39.165434  340627 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 15:05:39.170203  340627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:05:39.181542  340627 kubeadm.go:883] updating cluster {Name:no-preload-165275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-165275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 15:05:39.181646  340627 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:05:39.181686  340627 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:05:39.221263  340627 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:05:39.221285  340627 cache_images.go:85] Images are preloaded, skipping loading
	I1018 15:05:39.221293  340627 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 15:05:39.221422  340627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-165275 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-165275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 15:05:39.221506  340627 ssh_runner.go:195] Run: crio config
	I1018 15:05:39.271048  340627 cni.go:84] Creating CNI manager for ""
	I1018 15:05:39.271076  340627 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:05:39.271099  340627 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 15:05:39.271137  340627 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-165275 NodeName:no-preload-165275 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 15:05:39.271310  340627 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-165275"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 15:05:39.271386  340627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 15:05:39.279819  340627 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 15:05:39.279890  340627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 15:05:39.287658  340627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 15:05:39.301397  340627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 15:05:39.314486  340627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1018 15:05:39.328119  340627 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 15:05:39.332360  340627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:05:39.342507  340627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:05:39.438176  340627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:05:39.463576  340627 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275 for IP: 192.168.85.2
	I1018 15:05:39.463598  340627 certs.go:195] generating shared ca certs ...
	I1018 15:05:39.463638  340627 certs.go:227] acquiring lock for ca certs: {Name:mk6d3b4e44856f498157352ffe8bb89d2c7a3998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:05:39.463797  340627 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key
	I1018 15:05:39.463844  340627 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key
	I1018 15:05:39.463853  340627 certs.go:257] generating profile certs ...
	I1018 15:05:39.463978  340627 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/client.key
	I1018 15:05:39.464052  340627 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/apiserver.key.93e76921
	I1018 15:05:39.464175  340627 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/proxy-client.key
	I1018 15:05:39.464349  340627 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem (1338 bytes)
	W1018 15:05:39.464400  340627 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187_empty.pem, impossibly tiny 0 bytes
	I1018 15:05:39.464415  340627 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 15:05:39.464449  340627 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem (1082 bytes)
	I1018 15:05:39.464482  340627 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem (1123 bytes)
	I1018 15:05:39.464508  340627 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem (1675 bytes)
	I1018 15:05:39.464562  340627 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:05:39.465524  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 15:05:39.485814  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 15:05:39.508418  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 15:05:39.530869  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 15:05:39.558492  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 15:05:39.581555  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 15:05:39.601305  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 15:05:39.619241  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 15:05:39.637505  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 15:05:39.655288  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem --> /usr/share/ca-certificates/93187.pem (1338 bytes)
	I1018 15:05:39.674118  340627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /usr/share/ca-certificates/931872.pem (1708 bytes)
	I1018 15:05:39.693640  340627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 15:05:39.707282  340627 ssh_runner.go:195] Run: openssl version
	I1018 15:05:39.713363  340627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93187.pem && ln -fs /usr/share/ca-certificates/93187.pem /etc/ssl/certs/93187.pem"
	I1018 15:05:39.722231  340627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93187.pem
	I1018 15:05:39.725949  340627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 14:26 /usr/share/ca-certificates/93187.pem
	I1018 15:05:39.725999  340627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93187.pem
	I1018 15:05:39.763857  340627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93187.pem /etc/ssl/certs/51391683.0"
	I1018 15:05:39.772320  340627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/931872.pem && ln -fs /usr/share/ca-certificates/931872.pem /etc/ssl/certs/931872.pem"
	I1018 15:05:39.780898  340627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/931872.pem
	I1018 15:05:39.784868  340627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 14:26 /usr/share/ca-certificates/931872.pem
	I1018 15:05:39.784947  340627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/931872.pem
	I1018 15:05:39.822389  340627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/931872.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 15:05:39.832272  340627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 15:05:39.843113  340627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:05:39.847388  340627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:05:39.847447  340627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:05:39.887626  340627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 15:05:39.895862  340627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 15:05:39.899716  340627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 15:05:39.936443  340627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 15:05:39.979577  340627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 15:05:40.037866  340627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 15:05:40.087924  340627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 15:05:40.148093  340627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
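
Each `-checkend 86400` run above asks whether a certificate is still valid 24 hours from now; a failure would force regeneration. The Go equivalent of that check:

    // Hedged Go equivalent of `openssl x509 -noout -checkend 86400`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err) // true would force regeneration, like a failed -checkend
    }
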
	I1018 15:05:40.213284  340627 kubeadm.go:400] StartCluster: {Name:no-preload-165275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-165275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:05:40.213388  340627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 15:05:40.213444  340627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 15:05:40.252230  340627 cri.go:89] found id: "d37cf270acf4cdb482c3d7fdb5fa2e8ecdf544a1b1172db005a424e0b482c119"
	I1018 15:05:40.252255  340627 cri.go:89] found id: "ce5891388244aaa439d0521f1c59f74520a5be8cfe55bae6fec434a5125ea972"
	I1018 15:05:40.252261  340627 cri.go:89] found id: "c1d28d4d24c3ece98b690ada9bd56a5d7ebdd925b9e2320e8f7d9f1b62f77b34"
	I1018 15:05:40.252265  340627 cri.go:89] found id: "3e2c583673b99348bde570e54f1913de407877ce7969439954326ffcf6f4fc31"
	I1018 15:05:40.252364  340627 cri.go:89] found id: ""
	I1018 15:05:40.252420  340627 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 15:05:40.274776  340627 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:05:40Z" level=error msg="open /run/runc: no such file or directory"
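The runc probe fails only because /run/runc does not exist yet, i.e. runc has never created a container on this fresh node; minikube records it as a warning ("unpause failed: list paused") and carries on with the restart. An illustrative way to tell this apart from a genuinely broken runtime:

	# exit status 1 with "open /run/runc: no such file or directory"
	# means an empty runc state dir, not a failing runtime
	sudo runc list -f json || echo "no runc-managed containers yet"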
	I1018 15:05:40.274886  340627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 15:05:40.285663  340627 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 15:05:40.285683  340627 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 15:05:40.285728  340627 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 15:05:40.295612  340627 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 15:05:40.296455  340627 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-165275" does not appear in /home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:05:40.296980  340627 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-89690/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-165275" cluster setting kubeconfig missing "no-preload-165275" context setting]
	I1018 15:05:40.297754  340627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/kubeconfig: {Name:mkdc6fbc9be8e57447490b30dd5bb0086611876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
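The repair above writes the missing "no-preload-165275" cluster and context entries into the kubeconfig under a file lock. Done by hand, a roughly equivalent repair would be the following kubectl config calls (the server address comes from this run; the CA path and user name are assumptions for the sketch):

	export KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	kubectl config set-cluster no-preload-165275 \
	  --server=https://192.168.85.2:8443 \
	  --certificate-authority="$HOME/.minikube/ca.crt" --embed-certs=true
	kubectl config set-context no-preload-165275 \
	  --cluster=no-preload-165275 --user=no-preload-165275
	kubectl config use-context no-preload-165275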
	I1018 15:05:40.299709  340627 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 15:05:40.309547  340627 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 15:05:40.309578  340627 kubeadm.go:601] duration metric: took 23.888293ms to restartPrimaryControlPlane
	I1018 15:05:40.309587  340627 kubeadm.go:402] duration metric: took 96.31728ms to StartCluster
	I1018 15:05:40.309604  340627 settings.go:142] acquiring lock: {Name:mk9d9d72167811b3554435e137ed17137bf54a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:05:40.309667  340627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:05:40.311107  340627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/kubeconfig: {Name:mkdc6fbc9be8e57447490b30dd5bb0086611876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:05:40.311348  340627 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:05:40.311485  340627 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 15:05:40.311597  340627 config.go:182] Loaded profile config "no-preload-165275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:05:40.311602  340627 addons.go:69] Setting storage-provisioner=true in profile "no-preload-165275"
	I1018 15:05:40.311622  340627 addons.go:238] Setting addon storage-provisioner=true in "no-preload-165275"
	W1018 15:05:40.311630  340627 addons.go:247] addon storage-provisioner should already be in state true
	I1018 15:05:40.311634  340627 addons.go:69] Setting dashboard=true in profile "no-preload-165275"
	I1018 15:05:40.311641  340627 addons.go:69] Setting default-storageclass=true in profile "no-preload-165275"
	I1018 15:05:40.311654  340627 addons.go:238] Setting addon dashboard=true in "no-preload-165275"
	I1018 15:05:40.311654  340627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-165275"
	W1018 15:05:40.311663  340627 addons.go:247] addon dashboard should already be in state true
	I1018 15:05:40.311664  340627 host.go:66] Checking if "no-preload-165275" exists ...
	I1018 15:05:40.311691  340627 host.go:66] Checking if "no-preload-165275" exists ...
	I1018 15:05:40.312013  340627 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:05:40.312182  340627 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:05:40.312205  340627 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:05:40.314684  340627 out.go:179] * Verifying Kubernetes components...
	I1018 15:05:40.315776  340627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:05:40.337955  340627 addons.go:238] Setting addon default-storageclass=true in "no-preload-165275"
	W1018 15:05:40.337981  340627 addons.go:247] addon default-storageclass should already be in state true
	I1018 15:05:40.338010  340627 host.go:66] Checking if "no-preload-165275" exists ...
	I1018 15:05:40.338471  340627 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:05:40.339513  340627 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 15:05:40.341027  340627 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 15:05:40.341044  340627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 15:05:40.341095  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:40.342905  340627 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 15:05:40.344356  340627 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 15:05:37.583764  340611 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-775590:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.480680449s)
	I1018 15:05:37.583796  340611 kic.go:203] duration metric: took 4.480892265s to extract preloaded images to volume ...
	W1018 15:05:37.583893  340611 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 15:05:37.583970  340611 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 15:05:37.584015  340611 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 15:05:37.649279  340611 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-775590 --name embed-certs-775590 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-775590 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-775590 --network embed-certs-775590 --ip 192.168.76.2 --volume embed-certs-775590:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 15:05:37.935219  340611 cli_runner.go:164] Run: docker container inspect embed-certs-775590 --format={{.State.Running}}
	I1018 15:05:37.956184  340611 cli_runner.go:164] Run: docker container inspect embed-certs-775590 --format={{.State.Status}}
	I1018 15:05:37.980042  340611 cli_runner.go:164] Run: docker exec embed-certs-775590 stat /var/lib/dpkg/alternatives/iptables
	I1018 15:05:38.029998  340611 oci.go:144] the created container "embed-certs-775590" has a running status.
	I1018 15:05:38.030041  340611 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/embed-certs-775590/id_rsa...
	I1018 15:05:38.171566  340611 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-89690/.minikube/machines/embed-certs-775590/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 15:05:38.202898  340611 cli_runner.go:164] Run: docker container inspect embed-certs-775590 --format={{.State.Status}}
	I1018 15:05:38.225241  340611 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 15:05:38.225266  340611 kic_runner.go:114] Args: [docker exec --privileged embed-certs-775590 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 15:05:38.281763  340611 cli_runner.go:164] Run: docker container inspect embed-certs-775590 --format={{.State.Status}}
	I1018 15:05:38.302452  340611 machine.go:93] provisionDockerMachine start ...
	I1018 15:05:38.302634  340611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:05:38.332873  340611 main.go:141] libmachine: Using SSH client type: native
	I1018 15:05:38.333272  340611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1018 15:05:38.333294  340611 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:05:38.479153  340611 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-775590
	
	I1018 15:05:38.479189  340611 ubuntu.go:182] provisioning hostname "embed-certs-775590"
	I1018 15:05:38.479256  340611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:05:38.499392  340611 main.go:141] libmachine: Using SSH client type: native
	I1018 15:05:38.499704  340611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1018 15:05:38.499731  340611 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-775590 && echo "embed-certs-775590" | sudo tee /etc/hostname
	I1018 15:05:38.657592  340611 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-775590
	
	I1018 15:05:38.657679  340611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:05:38.677976  340611 main.go:141] libmachine: Using SSH client type: native
	I1018 15:05:38.678265  340611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1018 15:05:38.678292  340611 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-775590' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-775590/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-775590' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:05:38.814788  340611 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 15:05:38.814829  340611 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 15:05:38.814888  340611 ubuntu.go:190] setting up certificates
	I1018 15:05:38.814902  340611 provision.go:84] configureAuth start
	I1018 15:05:38.814995  340611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-775590
	I1018 15:05:38.837234  340611 provision.go:143] copyHostCerts
	I1018 15:05:38.837390  340611 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem, removing ...
	I1018 15:05:38.837407  340611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:05:38.837488  340611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 15:05:38.837589  340611 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem, removing ...
	I1018 15:05:38.837598  340611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:05:38.837631  340611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 15:05:38.837696  340611 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem, removing ...
	I1018 15:05:38.837705  340611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:05:38.837733  340611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 15:05:38.837790  340611 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.embed-certs-775590 san=[127.0.0.1 192.168.76.2 embed-certs-775590 localhost minikube]
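Provisioning now issues the Docker machine server certificate, signed by the local CA and carrying the SAN list shown above; the CertExpiration of 26280h in the cluster config works out to 1095 days. A hedged openssl sketch of the equivalent flow (paths abbreviated to ~/.minikube; the CSR subject mirrors the org= value above):

	cd ~/.minikube
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout machines/server-key.pem -out server.csr \
	  -subj "/O=jenkins.embed-certs-775590"
	# sign with the shared CA and attach the SANs from the log line above
	openssl x509 -req -in server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem \
	  -CAcreateserial -days 1095 -out machines/server.pem \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:embed-certs-775590,DNS:localhost,DNS:minikube")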
	I1018 15:05:39.023235  340611 provision.go:177] copyRemoteCerts
	I1018 15:05:39.023301  340611 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:05:39.023356  340611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:05:39.045227  340611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/embed-certs-775590/id_rsa Username:docker}
	I1018 15:05:39.147568  340611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:05:39.170895  340611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 15:05:39.189996  340611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 15:05:39.209945  340611 provision.go:87] duration metric: took 394.999837ms to configureAuth
	I1018 15:05:39.209976  340611 ubuntu.go:206] setting minikube options for container-runtime
	I1018 15:05:39.210176  340611 config.go:182] Loaded profile config "embed-certs-775590": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:05:39.210297  340611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:05:39.229897  340611 main.go:141] libmachine: Using SSH client type: native
	I1018 15:05:39.230220  340611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1018 15:05:39.230247  340611 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:05:39.505366  340611 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 15:05:39.505396  340611 machine.go:96] duration metric: took 1.202918838s to provisionDockerMachine
	I1018 15:05:39.505408  340611 client.go:171] duration metric: took 6.989205013s to LocalClient.Create
	I1018 15:05:39.505432  340611 start.go:167] duration metric: took 6.989268605s to libmachine.API.Create "embed-certs-775590"
	I1018 15:05:39.505442  340611 start.go:293] postStartSetup for "embed-certs-775590" (driver="docker")
	I1018 15:05:39.505456  340611 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:05:39.505543  340611 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:05:39.505600  340611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:05:39.526240  340611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/embed-certs-775590/id_rsa Username:docker}
	I1018 15:05:39.630956  340611 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:05:39.634771  340611 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 15:05:39.634800  340611 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 15:05:39.634812  340611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/addons for local assets ...
	I1018 15:05:39.634867  340611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/files for local assets ...
	I1018 15:05:39.635004  340611 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem -> 931872.pem in /etc/ssl/certs
	I1018 15:05:39.635109  340611 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:05:39.642637  340611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:05:39.663296  340611 start.go:296] duration metric: took 157.840714ms for postStartSetup
	I1018 15:05:39.663615  340611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-775590
	I1018 15:05:39.683718  340611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/config.json ...
	I1018 15:05:39.684021  340611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 15:05:39.684065  340611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:05:39.702196  340611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/embed-certs-775590/id_rsa Username:docker}
	I1018 15:05:39.796902  340611 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 15:05:39.801723  340611 start.go:128] duration metric: took 7.288688955s to createHost
	I1018 15:05:39.801748  340611 start.go:83] releasing machines lock for "embed-certs-775590", held for 7.288862673s
	I1018 15:05:39.801826  340611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-775590
	I1018 15:05:39.819525  340611 ssh_runner.go:195] Run: cat /version.json
	I1018 15:05:39.819554  340611 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:05:39.819571  340611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:05:39.819624  340611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:05:39.838577  340611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/embed-certs-775590/id_rsa Username:docker}
	I1018 15:05:39.839489  340611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/embed-certs-775590/id_rsa Username:docker}
	I1018 15:05:40.009479  340611 ssh_runner.go:195] Run: systemctl --version
	I1018 15:05:40.018349  340611 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:05:40.068862  340611 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:05:40.075811  340611 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:05:40.075965  340611 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:05:40.123707  340611 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 15:05:40.123753  340611 start.go:495] detecting cgroup driver to use...
	I1018 15:05:40.123788  340611 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 15:05:40.123971  340611 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:05:40.154768  340611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:05:40.175094  340611 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:05:40.175180  340611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:05:40.201164  340611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:05:40.224895  340611 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:05:40.350984  340611 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:05:40.503199  340611 docker.go:234] disabling docker service ...
	I1018 15:05:40.503358  340611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:05:40.535706  340611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:05:40.560418  340611 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:05:40.691319  340611 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:05:40.809367  340611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 15:05:40.826100  340611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:05:40.845820  340611 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 15:05:40.845999  340611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:40.857815  340611 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 15:05:40.857886  340611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:40.870602  340611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:40.883267  340611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:40.902650  340611 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:05:40.912965  340611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:40.922684  340611 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:05:40.940803  340611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
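Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with approximately the following settings; this fragment is reconstructed from the commands, not captured from the node:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]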
	I1018 15:05:40.952650  340611 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:05:40.963206  340611 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 15:05:40.975731  340611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:05:41.104233  340611 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 15:05:41.254742  340611 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:05:41.254815  340611 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:05:41.260028  340611 start.go:563] Will wait 60s for crictl version
	I1018 15:05:41.260099  340611 ssh_runner.go:195] Run: which crictl
	I1018 15:05:41.264817  340611 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 15:05:41.297600  340611 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 15:05:41.297678  340611 ssh_runner.go:195] Run: crio --version
	I1018 15:05:41.335197  340611 ssh_runner.go:195] Run: crio --version
	I1018 15:05:41.375530  340611 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 15:05:41.376723  340611 cli_runner.go:164] Run: docker network inspect embed-certs-775590 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:05:41.398890  340611 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 15:05:41.404299  340611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:05:41.421538  340611 kubeadm.go:883] updating cluster {Name:embed-certs-775590 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-775590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 15:05:41.421717  340611 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:05:41.421798  340611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:05:41.475399  340611 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:05:41.475426  340611 crio.go:433] Images already preloaded, skipping extraction
	I1018 15:05:41.475483  340611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:05:41.515330  340611 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:05:41.515357  340611 cache_images.go:85] Images are preloaded, skipping loading
	I1018 15:05:41.515368  340611 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 15:05:41.515485  340611 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-775590 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-775590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
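In the drop-in above, the empty ExecStart= line is the standard systemd idiom for clearing the base unit's start command before the following ExecStart= substitutes minikube's kubelet invocation. Illustrative commands to inspect and apply the merged unit on the node (daemon-reload is run a few lines below in this log):

	sudo systemctl cat kubelet     # base unit plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload   # make systemd re-read the drop-in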
	I1018 15:05:41.515569  340611 ssh_runner.go:195] Run: crio config
	I1018 15:05:41.582934  340611 cni.go:84] Creating CNI manager for ""
	I1018 15:05:41.582965  340611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:05:41.582990  340611 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 15:05:41.583018  340611 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-775590 NodeName:embed-certs-775590 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 15:05:41.583194  340611 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-775590"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
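The rendered config above is copied to /var/tmp/minikube/kubeadm.yaml.new just below (2214 bytes). One hedged way to sanity-check such a file against the node before use, not performed in this run:

	sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml.new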
	I1018 15:05:41.583269  340611 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 15:05:41.592992  340611 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 15:05:41.593060  340611 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 15:05:41.602679  340611 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 15:05:41.617517  340611 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 15:05:41.637524  340611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 15:05:41.652016  340611 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 15:05:41.657072  340611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:05:41.669406  340611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:05:41.795340  340611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:05:41.818745  340611 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590 for IP: 192.168.76.2
	I1018 15:05:41.818769  340611 certs.go:195] generating shared ca certs ...
	I1018 15:05:41.818790  340611 certs.go:227] acquiring lock for ca certs: {Name:mk6d3b4e44856f498157352ffe8bb89d2c7a3998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:05:41.818985  340611 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key
	I1018 15:05:41.819030  340611 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key
	I1018 15:05:41.819041  340611 certs.go:257] generating profile certs ...
	I1018 15:05:41.819094  340611 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/client.key
	I1018 15:05:41.819115  340611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/client.crt with IP's: []
	I1018 15:05:41.903386  340611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/client.crt ...
	I1018 15:05:41.903419  340611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/client.crt: {Name:mkba84890b93a6c6f757e0bf515c1f509c9fe549 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:05:41.903644  340611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/client.key ...
	I1018 15:05:41.903659  340611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/client.key: {Name:mk9dcfc19c00010d9b95977fe8d53da231f76ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:05:41.903786  340611 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/apiserver.key.608fddd8
	I1018 15:05:41.903801  340611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/apiserver.crt.608fddd8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 15:05:41.971210  340611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/apiserver.crt.608fddd8 ...
	I1018 15:05:41.971236  340611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/apiserver.crt.608fddd8: {Name:mkdce3ac1a0d7026ff7d5e29f38ad1747f9711ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:05:41.971441  340611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/apiserver.key.608fddd8 ...
	I1018 15:05:41.971462  340611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/apiserver.key.608fddd8: {Name:mk450272c48d1a793d7ea32b6b22671b7bb095ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:05:41.971551  340611 certs.go:382] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/apiserver.crt.608fddd8 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/apiserver.crt
	I1018 15:05:41.971628  340611 certs.go:386] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/apiserver.key.608fddd8 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/apiserver.key
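The apiserver certificate is issued for [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]: 10.96.0.1 is the first address of the ServiceCIDR 10.96.0.0/12 (the in-cluster "kubernetes" Service VIP) and 192.168.76.2 is the node IP, so clients inside and outside the cluster can verify the same cert. The SAN list can be confirmed with a standard one-liner (path abbreviated; illustrative):

	openssl x509 -noout -text \
	  -in ~/.minikube/profiles/embed-certs-775590/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'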
	I1018 15:05:41.971695  340611 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/proxy-client.key
	I1018 15:05:41.971716  340611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/proxy-client.crt with IP's: []
	I1018 15:05:42.199735  340611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/proxy-client.crt ...
	I1018 15:05:42.199769  340611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/proxy-client.crt: {Name:mk80919067bd8adffaf07b7b555a5f78e9578d28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:05:42.200012  340611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/proxy-client.key ...
	I1018 15:05:42.200033  340611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/proxy-client.key: {Name:mk9833018da58473b24719df42a7679493dc5b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:05:42.200320  340611 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem (1338 bytes)
	W1018 15:05:42.200379  340611 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187_empty.pem, impossibly tiny 0 bytes
	I1018 15:05:42.200399  340611 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 15:05:42.200429  340611 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem (1082 bytes)
	I1018 15:05:42.200463  340611 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem (1123 bytes)
	I1018 15:05:42.200494  340611 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem (1675 bytes)
	I1018 15:05:42.200560  340611 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:05:42.201363  340611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 15:05:42.246180  340611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 15:05:42.286549  340611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 15:05:40.345594  340627 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 15:05:40.345655  340627 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 15:05:40.345735  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:40.372410  340627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa Username:docker}
	I1018 15:05:40.373670  340627 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 15:05:40.373695  340627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 15:05:40.373758  340627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:05:40.380263  340627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa Username:docker}
	I1018 15:05:40.403004  340627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa Username:docker}
	I1018 15:05:40.505107  340627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:05:40.506008  340627 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 15:05:40.506031  340627 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 15:05:40.519478  340627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 15:05:40.527938  340627 node_ready.go:35] waiting up to 6m0s for node "no-preload-165275" to be "Ready" ...
	I1018 15:05:40.529781  340627 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 15:05:40.529840  340627 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 15:05:40.534578  340627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 15:05:40.556758  340627 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 15:05:40.556792  340627 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 15:05:40.584885  340627 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 15:05:40.584921  340627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 15:05:40.626120  340627 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 15:05:40.626158  340627 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 15:05:40.648147  340627 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 15:05:40.648179  340627 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 15:05:40.666276  340627 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 15:05:40.666303  340627 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 15:05:40.685051  340627 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 15:05:40.685078  340627 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 15:05:40.702880  340627 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 15:05:40.702935  340627 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 15:05:40.718265  340627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
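All ten dashboard manifests are applied in a single kubectl apply batch using the node-local kubeconfig and the bundled kubectl binary. Once it completes, an illustrative follow-up check (the kubernetes-dashboard namespace is created by dashboard-ns.yaml):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl \
	  get pods -n kubernetes-dashboard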
	I1018 15:05:42.227694  340627 node_ready.go:49] node "no-preload-165275" is "Ready"
	I1018 15:05:42.227746  340627 node_ready.go:38] duration metric: took 1.699772673s for node "no-preload-165275" to be "Ready" ...
	I1018 15:05:42.227766  340627 api_server.go:52] waiting for apiserver process to appear ...
	I1018 15:05:42.227822  340627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:05:42.953207  340627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.418501511s)
	I1018 15:05:42.953393  340627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.235090368s)
	I1018 15:05:42.953443  340627 api_server.go:72] duration metric: took 2.642067028s to wait for apiserver process to appear ...
	I1018 15:05:42.954812  340627 api_server.go:88] waiting for apiserver healthz status ...
	I1018 15:05:42.954840  340627 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 15:05:42.955039  340627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.435525863s)
	I1018 15:05:42.958291  340627 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-165275 addons enable metrics-server
	
	I1018 15:05:42.966829  340627 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 15:05:42.966866  340627 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
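The 500 from /healthz is transient: every check reports ok except the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks, which fail only until the apiserver finishes seeding the default RBAC roles and PriorityClasses shortly after startup. An illustrative wait loop against the same endpoint (-k skips TLS verification, since the server presents the cluster's self-signed CA):

	until curl -sk https://192.168.85.2:8443/healthz | grep -qx 'ok'; do
	  sleep 2
	done
	echo "apiserver healthy"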
	I1018 15:05:42.971029  340627 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	
	
	==> CRI-O <==
	Oct 18 15:05:05 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:05.549702513Z" level=info msg="Created container ca59ac639c4af3d27021b467cc03eca4d72a3f9c7d8418fc024c78d9006549fe: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fsjwd/kubernetes-dashboard" id=eec66f6f-0020-41b2-8451-4bff97a5dec7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:05:05 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:05.550286736Z" level=info msg="Starting container: ca59ac639c4af3d27021b467cc03eca4d72a3f9c7d8418fc024c78d9006549fe" id=4e0a8c74-dc33-4bf6-90ec-fd038c71a8f2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:05:05 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:05.55217582Z" level=info msg="Started container" PID=1742 containerID=ca59ac639c4af3d27021b467cc03eca4d72a3f9c7d8418fc024c78d9006549fe description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fsjwd/kubernetes-dashboard id=4e0a8c74-dc33-4bf6-90ec-fd038c71a8f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=52832e2207fcd42f0c4d275f1d6a6eb49814e0649b34072ddd43432ab105c8b4
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.874378719Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0b85fa3b-4fa5-4023-8fb7-1a19d39391a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.875337619Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7d7f31fc-e0df-4018-8cfa-3a34d2f2ce86 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.876482003Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=22e2a9e5-223d-42ed-bf63-b346b8e4c6a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.876773778Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.881261475Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.881540567Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a702a0bb435c724b11ca071388b959d60df1e8a255f08da39d54ea27303fed6c/merged/etc/passwd: no such file or directory"
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.881634027Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a702a0bb435c724b11ca071388b959d60df1e8a255f08da39d54ea27303fed6c/merged/etc/group: no such file or directory"
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.881991398Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.924394837Z" level=info msg="Created container 067057e99a71c35cef6be48c228170e8a97bf712bc8e81bb891f09faeeff93cf: kube-system/storage-provisioner/storage-provisioner" id=22e2a9e5-223d-42ed-bf63-b346b8e4c6a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.925144268Z" level=info msg="Starting container: 067057e99a71c35cef6be48c228170e8a97bf712bc8e81bb891f09faeeff93cf" id=e0a2d3a4-6ddf-457f-a424-697c79b20990 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:05:18 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:18.92726014Z" level=info msg="Started container" PID=1770 containerID=067057e99a71c35cef6be48c228170e8a97bf712bc8e81bb891f09faeeff93cf description=kube-system/storage-provisioner/storage-provisioner id=e0a2d3a4-6ddf-457f-a424-697c79b20990 name=/runtime.v1.RuntimeService/StartContainer sandboxID=385161ffc9351d2c6def8a9233a0080eeb73531edddc365b943cd2d5422d9889
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.745878875Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d114dea8-8958-4f96-9698-afdedd02f4e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.746799818Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2a578fe7-fbd2-4e9a-8f8f-5888434a20c7 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.750482413Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w/dashboard-metrics-scraper" id=e3c3771f-41fc-40f4-8f16-255c19c102c0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.750846201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.761457382Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.762097591Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.803004063Z" level=info msg="Created container 2d707f4e636f3c33af0939dcc558810386c9490c2aadb8081a1acd6065892e82: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w/dashboard-metrics-scraper" id=e3c3771f-41fc-40f4-8f16-255c19c102c0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.803771023Z" level=info msg="Starting container: 2d707f4e636f3c33af0939dcc558810386c9490c2aadb8081a1acd6065892e82" id=9dbf9375-5856-4702-bb43-10312d308d16 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.806608736Z" level=info msg="Started container" PID=1785 containerID=2d707f4e636f3c33af0939dcc558810386c9490c2aadb8081a1acd6065892e82 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w/dashboard-metrics-scraper id=9dbf9375-5856-4702-bb43-10312d308d16 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c6079cf274a1ea30a4f60de6c21e4edcfb9bbd35c675c40b8ea1fbc86973d2d
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.889769807Z" level=info msg="Removing container: 44d616f811d03ec4fe2e458514a60eb296291128a62959a31cbf8d5a20cdd3b7" id=49a47c26-6c4e-421e-8eb8-d9c3014525ef name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:05:23 old-k8s-version-948537 crio[562]: time="2025-10-18T15:05:23.901115602Z" level=info msg="Removed container 44d616f811d03ec4fe2e458514a60eb296291128a62959a31cbf8d5a20cdd3b7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w/dashboard-metrics-scraper" id=49a47c26-6c4e-421e-8eb8-d9c3014525ef name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	2d707f4e636f3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   3c6079cf274a1       dashboard-metrics-scraper-5f989dc9cf-h786w       kubernetes-dashboard
	067057e99a71c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago       Running             storage-provisioner         1                   385161ffc9351       storage-provisioner                              kube-system
	ca59ac639c4af       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago       Running             kubernetes-dashboard        0                   52832e2207fcd       kubernetes-dashboard-8694d4445c-fsjwd            kubernetes-dashboard
	6b4b5c46eb7c0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           56 seconds ago       Running             coredns                     0                   214f7f9f0fe13       coredns-5dd5756b68-j8xvf                         kube-system
	fb5d3cda7b7d3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago       Running             busybox                     1                   cf8a166ceeea4       busybox                                          default
	67ecafd74cf06       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago       Running             kindnet-cni                 0                   b07a8572172d8       kindnet-xwd4j                                    kube-system
	f6b23d7900af3       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           56 seconds ago       Running             kube-proxy                  0                   dfc1099c3067b       kube-proxy-kwt74                                 kube-system
	52b03114a7d11       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago       Exited              storage-provisioner         0                   385161ffc9351       storage-provisioner                              kube-system
	66072254c9bf6       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   b30ba162954d0       kube-scheduler-old-k8s-version-948537            kube-system
	44dad120630eb       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   c646ecf8ad549       kube-controller-manager-old-k8s-version-948537   kube-system
	c6c9f1798915d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   08e11699895bd       etcd-old-k8s-version-948537                      kube-system
	851f6b38dcd85       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   7648ee04f4961       kube-apiserver-old-k8s-version-948537            kube-system
	
	
	==> coredns [6b4b5c46eb7c020c11c44ffc6289452f21552a034d98560f814fd10cd937517d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52729 - 43173 "HINFO IN 1963076601915104059.3394339738268485656. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.09531895s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-948537
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-948537
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=old-k8s-version-948537
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_03_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:03:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-948537
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:05:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:05:17 +0000   Sat, 18 Oct 2025 15:03:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:05:17 +0000   Sat, 18 Oct 2025 15:03:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:05:17 +0000   Sat, 18 Oct 2025 15:03:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 15:05:17 +0000   Sat, 18 Oct 2025 15:04:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-948537
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                47943eca-9697-4781-a55f-5b00086edf55
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-5dd5756b68-j8xvf                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-old-k8s-version-948537                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m7s
	  kube-system                 kindnet-xwd4j                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-old-k8s-version-948537             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-controller-manager-old-k8s-version-948537    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-kwt74                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-old-k8s-version-948537             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-h786w        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-fsjwd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 113s                   kube-proxy       
	  Normal  Starting                 56s                    kube-proxy       
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-948537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-948537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-948537 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m7s                   kubelet          Node old-k8s-version-948537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m7s                   kubelet          Node old-k8s-version-948537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m7s                   kubelet          Node old-k8s-version-948537 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m7s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           115s                   node-controller  Node old-k8s-version-948537 event: Registered Node old-k8s-version-948537 in Controller
	  Normal  NodeReady                100s                   kubelet          Node old-k8s-version-948537 status is now: NodeReady
	  Normal  Starting                 61s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x9 over 61s)      kubelet          Node old-k8s-version-948537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node old-k8s-version-948537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x7 over 61s)      kubelet          Node old-k8s-version-948537 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                    node-controller  Node old-k8s-version-948537 event: Registered Node old-k8s-version-948537 in Controller
	
	
	==> dmesg <==
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	
	
	==> etcd [c6c9f1798915d53f9ebc8eea360ea84ac0d228a2a817fa4a501701022703284a] <==
	{"level":"info","ts":"2025-10-18T15:04:44.33544Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T15:04:44.335515Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T15:04:44.337538Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T15:04:44.338074Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T15:04:44.337694Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-18T15:04:44.338894Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-18T15:04:44.338745Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T15:04:45.825183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T15:04:45.825225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T15:04:45.825255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-18T15:04:45.825267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T15:04:45.825272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-18T15:04:45.82528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-10-18T15:04:45.825287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-18T15:04:45.826163Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-948537 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T15:04:45.826196Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T15:04:45.826185Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T15:04:45.826307Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T15:04:45.826337Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T15:04:45.827985Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-10-18T15:04:45.82841Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-18T15:05:36.619843Z","caller":"traceutil/trace.go:171","msg":"trace[485509943] linearizableReadLoop","detail":"{readStateIndex:696; appliedIndex:695; }","duration":"214.630312ms","start":"2025-10-18T15:05:36.405187Z","end":"2025-10-18T15:05:36.619817Z","steps":["trace[485509943] 'read index received'  (duration: 133.835413ms)","trace[485509943] 'applied index is now lower than readState.Index'  (duration: 80.794144ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T15:05:36.61987Z","caller":"traceutil/trace.go:171","msg":"trace[1343467923] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"216.839382ms","start":"2025-10-18T15:05:36.403003Z","end":"2025-10-18T15:05:36.619842Z","steps":["trace[1343467923] 'process raft request'  (duration: 136.039992ms)","trace[1343467923] 'compare'  (duration: 80.653431ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T15:05:36.620107Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.913224ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1122"}
	{"level":"info","ts":"2025-10-18T15:05:36.620198Z","caller":"traceutil/trace.go:171","msg":"trace[1941750066] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:658; }","duration":"215.024147ms","start":"2025-10-18T15:05:36.405161Z","end":"2025-10-18T15:05:36.620185Z","steps":["trace[1941750066] 'agreement among raft nodes before linearized reading'  (duration: 214.74695ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:05:44 up  2:48,  0 user,  load average: 3.62, 2.83, 1.90
	Linux old-k8s-version-948537 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [67ecafd74cf06e59fa294c1705e72d6c1eee8307b1739175eda1df37d8321210] <==
	I1018 15:04:48.348584       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 15:04:48.348861       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 15:04:48.349041       1 main.go:148] setting mtu 1500 for CNI 
	I1018 15:04:48.349067       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 15:04:48.349092       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T15:04:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 15:04:48.548890       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 15:04:48.549002       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 15:04:48.549039       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 15:04:48.688618       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 15:04:48.889683       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 15:04:48.889734       1 metrics.go:72] Registering metrics
	I1018 15:04:48.890640       1 controller.go:711] "Syncing nftables rules"
	I1018 15:04:58.549703       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:04:58.549802       1 main.go:301] handling current node
	I1018 15:05:08.550410       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:05:08.550456       1 main.go:301] handling current node
	I1018 15:05:18.549020       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:05:18.549077       1 main.go:301] handling current node
	I1018 15:05:28.551045       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:05:28.551099       1 main.go:301] handling current node
	I1018 15:05:38.556063       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:05:38.556099       1 main.go:301] handling current node
	
	
	==> kube-apiserver [851f6b38dcd85d53e129d77afb0ca322c1c82f4dcc331a5606dc1cbaa443e3f6] <==
	I1018 15:04:46.853511       1 shared_informer.go:318] Caches are synced for configmaps
	I1018 15:04:46.853559       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 15:04:46.853578       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1018 15:04:46.853712       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 15:04:46.853808       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 15:04:46.853924       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1018 15:04:46.854705       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 15:04:46.854748       1 aggregator.go:166] initial CRD sync complete...
	I1018 15:04:46.854760       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 15:04:46.854767       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 15:04:46.854788       1 cache.go:39] Caches are synced for autoregister controller
	E1018 15:04:46.858822       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 15:04:46.885851       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1018 15:04:47.689190       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 15:04:47.722775       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 15:04:47.750567       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:04:47.759132       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:04:47.763071       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:04:47.772488       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 15:04:47.833092       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.92.9"}
	I1018 15:04:47.847661       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.244.46"}
	I1018 15:04:59.065827       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 15:04:59.065870       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 15:04:59.278431       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 15:04:59.329348       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [44dad120630eb2d0733b71694fa13433f00c53f74453d3fb34d10d2c5e2c1174] <==
	I1018 15:04:59.378173       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1018 15:04:59.388967       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="297.762784ms"
	I1018 15:04:59.389102       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.74µs"
	I1018 15:04:59.392090       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-h786w"
	I1018 15:04:59.392987       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-fsjwd"
	I1018 15:04:59.399526       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="31.830764ms"
	I1018 15:04:59.400748       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="22.829975ms"
	I1018 15:04:59.406452       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.651517ms"
	I1018 15:04:59.406533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="42.412µs"
	I1018 15:04:59.407789       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="8.221021ms"
	I1018 15:04:59.407863       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="39.528µs"
	I1018 15:04:59.410070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.041µs"
	I1018 15:04:59.418632       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="68.747µs"
	I1018 15:04:59.596140       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 15:04:59.663714       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 15:04:59.663743       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 15:05:02.837391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.482µs"
	I1018 15:05:03.846266       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.186µs"
	I1018 15:05:04.848454       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="112.324µs"
	I1018 15:05:05.864525       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.963902ms"
	I1018 15:05:05.864640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="60.995µs"
	I1018 15:05:23.902096       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.262µs"
	I1018 15:05:25.648440       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.894799ms"
	I1018 15:05:25.648565       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.517µs"
	I1018 15:05:29.711208       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="97.072µs"
	
	
	==> kube-proxy [f6b23d7900af3b31399d5fe6ff8b1e0a4f89b0cb9d8e045f2c6bf85fc2a3c4da] <==
	I1018 15:04:48.145063       1 server_others.go:69] "Using iptables proxy"
	I1018 15:04:48.155222       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1018 15:04:48.175297       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 15:04:48.177664       1 server_others.go:152] "Using iptables Proxier"
	I1018 15:04:48.177702       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 15:04:48.177711       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 15:04:48.177743       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 15:04:48.178051       1 server.go:846] "Version info" version="v1.28.0"
	I1018 15:04:48.178067       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:04:48.178571       1 config.go:97] "Starting endpoint slice config controller"
	I1018 15:04:48.178581       1 config.go:188] "Starting service config controller"
	I1018 15:04:48.178600       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 15:04:48.178602       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 15:04:48.178607       1 config.go:315] "Starting node config controller"
	I1018 15:04:48.178623       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 15:04:48.278869       1 shared_informer.go:318] Caches are synced for service config
	I1018 15:04:48.278898       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1018 15:04:48.278879       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [66072254c9bf69ad4fa0d45670ab4ee9fbc8ac23b9081209ca73e1a08513bb77] <==
	I1018 15:04:44.810013       1 serving.go:348] Generated self-signed cert in-memory
	W1018 15:04:46.795357       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 15:04:46.795485       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 15:04:46.795533       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 15:04:46.795587       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 15:04:46.811315       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1018 15:04:46.811344       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:04:46.812949       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:04:46.812986       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1018 15:04:46.814126       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1018 15:04:46.814162       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1018 15:04:46.913649       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 15:04:59 old-k8s-version-948537 kubelet[725]: I1018 15:04:59.563508     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e5136374-8aee-44ed-af01-888265e276e1-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-h786w\" (UID: \"e5136374-8aee-44ed-af01-888265e276e1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w"
	Oct 18 15:04:59 old-k8s-version-948537 kubelet[725]: I1018 15:04:59.563577     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gpcq\" (UniqueName: \"kubernetes.io/projected/73d6354b-baf5-405e-9584-b844619eb7e4-kube-api-access-9gpcq\") pod \"kubernetes-dashboard-8694d4445c-fsjwd\" (UID: \"73d6354b-baf5-405e-9584-b844619eb7e4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fsjwd"
	Oct 18 15:04:59 old-k8s-version-948537 kubelet[725]: I1018 15:04:59.563779     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/73d6354b-baf5-405e-9584-b844619eb7e4-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-fsjwd\" (UID: \"73d6354b-baf5-405e-9584-b844619eb7e4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fsjwd"
	Oct 18 15:04:59 old-k8s-version-948537 kubelet[725]: I1018 15:04:59.563838     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d954x\" (UniqueName: \"kubernetes.io/projected/e5136374-8aee-44ed-af01-888265e276e1-kube-api-access-d954x\") pod \"dashboard-metrics-scraper-5f989dc9cf-h786w\" (UID: \"e5136374-8aee-44ed-af01-888265e276e1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w"
	Oct 18 15:05:02 old-k8s-version-948537 kubelet[725]: I1018 15:05:02.826771     725 scope.go:117] "RemoveContainer" containerID="bd805eb7955df2416c619e6863711d56ad5d28a983f416cd7798dfd897124e59"
	Oct 18 15:05:03 old-k8s-version-948537 kubelet[725]: I1018 15:05:03.831511     725 scope.go:117] "RemoveContainer" containerID="bd805eb7955df2416c619e6863711d56ad5d28a983f416cd7798dfd897124e59"
	Oct 18 15:05:03 old-k8s-version-948537 kubelet[725]: I1018 15:05:03.831749     725 scope.go:117] "RemoveContainer" containerID="44d616f811d03ec4fe2e458514a60eb296291128a62959a31cbf8d5a20cdd3b7"
	Oct 18 15:05:03 old-k8s-version-948537 kubelet[725]: E1018 15:05:03.832121     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h786w_kubernetes-dashboard(e5136374-8aee-44ed-af01-888265e276e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w" podUID="e5136374-8aee-44ed-af01-888265e276e1"
	Oct 18 15:05:04 old-k8s-version-948537 kubelet[725]: I1018 15:05:04.835497     725 scope.go:117] "RemoveContainer" containerID="44d616f811d03ec4fe2e458514a60eb296291128a62959a31cbf8d5a20cdd3b7"
	Oct 18 15:05:04 old-k8s-version-948537 kubelet[725]: E1018 15:05:04.835879     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h786w_kubernetes-dashboard(e5136374-8aee-44ed-af01-888265e276e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w" podUID="e5136374-8aee-44ed-af01-888265e276e1"
	Oct 18 15:05:05 old-k8s-version-948537 kubelet[725]: I1018 15:05:05.852475     725 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fsjwd" podStartSLOduration=1.056866081 podCreationTimestamp="2025-10-18 15:04:59 +0000 UTC" firstStartedPulling="2025-10-18 15:04:59.724506797 +0000 UTC m=+16.089795274" lastFinishedPulling="2025-10-18 15:05:05.520055919 +0000 UTC m=+21.885344393" observedRunningTime="2025-10-18 15:05:05.852299221 +0000 UTC m=+22.217587705" watchObservedRunningTime="2025-10-18 15:05:05.8524152 +0000 UTC m=+22.217703685"
	Oct 18 15:05:09 old-k8s-version-948537 kubelet[725]: I1018 15:05:09.701004     725 scope.go:117] "RemoveContainer" containerID="44d616f811d03ec4fe2e458514a60eb296291128a62959a31cbf8d5a20cdd3b7"
	Oct 18 15:05:09 old-k8s-version-948537 kubelet[725]: E1018 15:05:09.701461     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h786w_kubernetes-dashboard(e5136374-8aee-44ed-af01-888265e276e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w" podUID="e5136374-8aee-44ed-af01-888265e276e1"
	Oct 18 15:05:18 old-k8s-version-948537 kubelet[725]: I1018 15:05:18.873350     725 scope.go:117] "RemoveContainer" containerID="52b03114a7d11a70da29b03a2cdcf4e45d69beb3474365226e6d235c2df948ef"
	Oct 18 15:05:23 old-k8s-version-948537 kubelet[725]: I1018 15:05:23.744855     725 scope.go:117] "RemoveContainer" containerID="44d616f811d03ec4fe2e458514a60eb296291128a62959a31cbf8d5a20cdd3b7"
	Oct 18 15:05:23 old-k8s-version-948537 kubelet[725]: I1018 15:05:23.888435     725 scope.go:117] "RemoveContainer" containerID="44d616f811d03ec4fe2e458514a60eb296291128a62959a31cbf8d5a20cdd3b7"
	Oct 18 15:05:23 old-k8s-version-948537 kubelet[725]: I1018 15:05:23.888639     725 scope.go:117] "RemoveContainer" containerID="2d707f4e636f3c33af0939dcc558810386c9490c2aadb8081a1acd6065892e82"
	Oct 18 15:05:23 old-k8s-version-948537 kubelet[725]: E1018 15:05:23.889039     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h786w_kubernetes-dashboard(e5136374-8aee-44ed-af01-888265e276e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w" podUID="e5136374-8aee-44ed-af01-888265e276e1"
	Oct 18 15:05:29 old-k8s-version-948537 kubelet[725]: I1018 15:05:29.700762     725 scope.go:117] "RemoveContainer" containerID="2d707f4e636f3c33af0939dcc558810386c9490c2aadb8081a1acd6065892e82"
	Oct 18 15:05:29 old-k8s-version-948537 kubelet[725]: E1018 15:05:29.701228     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h786w_kubernetes-dashboard(e5136374-8aee-44ed-af01-888265e276e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h786w" podUID="e5136374-8aee-44ed-af01-888265e276e1"
	Oct 18 15:05:39 old-k8s-version-948537 kubelet[725]: I1018 15:05:39.486531     725 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 18 15:05:39 old-k8s-version-948537 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 15:05:39 old-k8s-version-948537 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 15:05:39 old-k8s-version-948537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 15:05:39 old-k8s-version-948537 systemd[1]: kubelet.service: Consumed 1.581s CPU time.
	
	
	==> kubernetes-dashboard [ca59ac639c4af3d27021b467cc03eca4d72a3f9c7d8418fc024c78d9006549fe] <==
	2025/10/18 15:05:05 Starting overwatch
	2025/10/18 15:05:05 Using namespace: kubernetes-dashboard
	2025/10/18 15:05:05 Using in-cluster config to connect to apiserver
	2025/10/18 15:05:05 Using secret token for csrf signing
	2025/10/18 15:05:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 15:05:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 15:05:05 Successful initial request to the apiserver, version: v1.28.0
	2025/10/18 15:05:05 Generating JWE encryption key
	2025/10/18 15:05:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 15:05:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 15:05:06 Initializing JWE encryption key from synchronized object
	2025/10/18 15:05:06 Creating in-cluster Sidecar client
	2025/10/18 15:05:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 15:05:06 Serving insecurely on HTTP port: 9090
	2025/10/18 15:05:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [067057e99a71c35cef6be48c228170e8a97bf712bc8e81bb891f09faeeff93cf] <==
	I1018 15:05:18.941033       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 15:05:18.951825       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 15:05:18.951868       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 15:05:36.400337       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 15:05:36.400418       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ecc713f-94b4-44e1-9a32-99bd38e1b784", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-948537_a22978d7-d3eb-4973-9024-9d54857f0397 became leader
	I1018 15:05:36.400506       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-948537_a22978d7-d3eb-4973-9024-9d54857f0397!
	I1018 15:05:36.501436       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-948537_a22978d7-d3eb-4973-9024-9d54857f0397!
	
	
	==> storage-provisioner [52b03114a7d11a70da29b03a2cdcf4e45d69beb3474365226e6d235c2df948ef] <==
	I1018 15:04:48.117177       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 15:05:18.120727       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
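The captured logs above show a consistent signature: kube-apiserver reporting "healthz check failed", coredns stuck at "waiting for Kubernetes API", and the first storage-provisioner attempt timing out against 10.96.0.1:443. A minimal sketch for inspecting the same readiness view by hand (assuming the kubeconfig context name used throughout this report):

	# List each apiserver readiness probe as [+]/[-], matching the healthz block above
	kubectl --context old-k8s-version-948537 get --raw '/readyz?verbose'
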
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-948537 -n old-k8s-version-948537
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-948537 -n old-k8s-version-948537: exit status 2 (342.184ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
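The `--format={{.APIServer}}` template above surfaces only one field, so "Running" here does not contradict the non-zero exit. A hedged way to see every component's state at once, using the JSON output mode that `status` also accepts, is a sketch like:

	# Dump all status fields to see which component tripped the non-zero exit
	out/minikube-linux-amd64 status -p old-k8s-version-948537 --output json
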
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-948537 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.27s)
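This pause failure shares its root cause with the EnableAddonWhileActive failures below: minikube's paused-state check shells out to `sudo runc list -f json` on the node and treats a non-zero exit as fatal. A minimal repro sketch against this node, assuming the docker-driver container is still running:

	# Same check minikube runs; expect: open /run/runc: no such file or directory
	docker exec old-k8s-version-948537 sudo runc list -f json
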

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-775590 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-775590 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (254.015496ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:06:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
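The `open /run/runc: no such file or directory` line suggests this CRI-O (1.34.x per the node logs) is not managing containers through runc's default state directory, so whether the workloads are actually tracked under a different runtime root (crun's, for instance) is an assumption worth verifying by hand:

	# Hypothetical check: which runtime state dirs exist, and what CRI-O calls its default runtime
	docker exec embed-certs-775590 sh -c 'ls -d /run/runc /run/crun 2>/dev/null; crio config 2>/dev/null | grep -m1 default_runtime'
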
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-775590 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-775590 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-775590 describe deploy/metrics-server -n kube-system: exit status 1 (58.475268ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-775590 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
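Since `describe deploy/metrics-server` returned NotFound, the addon never created the deployment, so the image assertion had nothing to inspect. Once the deployment exists, a hedged equivalent of what the test checks would be (jsonpath assumed from the standard Deployment schema):

	# Print the metrics-server image; the test expects it to contain fake.domain/registry.k8s.io/echoserver:1.4
	kubectl --context embed-certs-775590 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
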
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-775590
helpers_test.go:243: (dbg) docker inspect embed-certs-775590:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136",
	        "Created": "2025-10-18T15:05:37.66682901Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 341970,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T15:05:37.709000758Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136/hosts",
	        "LogPath": "/var/lib/docker/containers/fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136/fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136-json.log",
	        "Name": "/embed-certs-775590",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-775590:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-775590",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136",
	                "LowerDir": "/var/lib/docker/overlay2/0d871763812daaa41f53c8f03bf2ffa741ccbb0d8567e7bd1a04c83675b7f50c-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d871763812daaa41f53c8f03bf2ffa741ccbb0d8567e7bd1a04c83675b7f50c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d871763812daaa41f53c8f03bf2ffa741ccbb0d8567e7bd1a04c83675b7f50c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d871763812daaa41f53c8f03bf2ffa741ccbb0d8567e7bd1a04c83675b7f50c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-775590",
	                "Source": "/var/lib/docker/volumes/embed-certs-775590/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-775590",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-775590",
	                "name.minikube.sigs.k8s.io": "embed-certs-775590",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ad8541825879ea09063f88bd9a65cb87cfb363ae0bb4365a7c49c362b1ac9832",
	            "SandboxKey": "/var/run/docker/netns/ad8541825879",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-775590": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:32:75:fa:a7:e8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4b571e6f85a52c5072615169054e56aacc55a5a837ed83f6fbbd0772adfae9a2",
	                    "EndpointID": "688d940a1f15ae0843bb251477658eda6e83a2e508957dc2dcf9e3b17ddfd8b5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-775590",
	                        "fe1c521b2804"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
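The NetworkSettings.Ports map in the inspect output above is what the harness relies on to reach the node: 22/tcp (SSH) is published on 127.0.0.1:33073 and 8443/tcp (the API server) on 127.0.0.1:33076. A minimal Go sketch, purely illustrative rather than minikube's own code, that extracts those host-port bindings from the `docker inspect` JSON:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models only the slice of the inspect output needed here;
	// the field names match the JSON keys shown above (HostIp, HostPort).
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "embed-certs-775590").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		for _, e := range entries {
			for port, bindings := range e.NetworkSettings.Ports {
				for _, b := range bindings {
					// e.g. "22/tcp -> 127.0.0.1:33073"
					fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
				}
			}
		}
	}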
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-775590 -n embed-certs-775590
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-775590 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-775590 logs -n 25: (2.663165206s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p missing-upgrade-635158 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-635158       │ jenkins │ v1.37.0 │ 18 Oct 25 15:03 UTC │ 18 Oct 25 15:04 UTC │
	│ delete  │ -p missing-upgrade-635158                                                                                                                                                                                                                     │ missing-upgrade-635158       │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ start   │ -p no-preload-165275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-948537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │                     │
	│ stop    │ -p old-k8s-version-948537 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-948537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ start   │ -p old-k8s-version-948537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-165275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ stop    │ -p no-preload-165275 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-833162    │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-833162    │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ delete  │ -p kubernetes-upgrade-833162                                                                                                                                                                                                                  │ kubernetes-upgrade-833162    │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ addons  │ enable dashboard -p no-preload-165275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p embed-certs-775590 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p no-preload-165275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ image   │ old-k8s-version-948537 image list --format=json                                                                                                                                                                                               │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ pause   │ -p old-k8s-version-948537 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ delete  │ -p old-k8s-version-948537                                                                                                                                                                                                                     │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ delete  │ -p old-k8s-version-948537                                                                                                                                                                                                                     │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ delete  │ -p disable-driver-mounts-677415                                                                                                                                                                                                               │ disable-driver-mounts-677415 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p default-k8s-diff-port-489104 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ start   │ -p cert-expiration-327346 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-327346       │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ delete  │ -p cert-expiration-327346                                                                                                                                                                                                                     │ cert-expiration-327346       │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p newest-cni-741831 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-775590 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:06:18
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:06:18.990992  352142 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:06:18.991107  352142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:06:18.991114  352142 out.go:374] Setting ErrFile to fd 2...
	I1018 15:06:18.991124  352142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:06:18.991316  352142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:06:18.991810  352142 out.go:368] Setting JSON to false
	I1018 15:06:18.993170  352142 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10130,"bootTime":1760789849,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:06:18.993269  352142 start.go:141] virtualization: kvm guest
	I1018 15:06:18.995348  352142 out.go:179] * [newest-cni-741831] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:06:18.996606  352142 notify.go:220] Checking for updates...
	I1018 15:06:18.996634  352142 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:06:18.997879  352142 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:06:18.999081  352142 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:06:19.000329  352142 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 15:06:19.001580  352142 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:06:19.002773  352142 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:06:19.004542  352142 config.go:182] Loaded profile config "default-k8s-diff-port-489104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:19.004705  352142 config.go:182] Loaded profile config "embed-certs-775590": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:19.004931  352142 config.go:182] Loaded profile config "no-preload-165275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:19.005076  352142 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:06:19.029798  352142 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:06:19.029968  352142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:06:19.087262  352142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 15:06:19.076975606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:06:19.087375  352142 docker.go:318] overlay module found
	I1018 15:06:19.089283  352142 out.go:179] * Using the docker driver based on user configuration
	I1018 15:06:16.235796  347067 addons.go:514] duration metric: took 508.874239ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 15:06:16.564682  347067 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-489104" context rescaled to 1 replicas
	W1018 15:06:18.064585  347067 node_ready.go:57] node "default-k8s-diff-port-489104" has "Ready":"False" status (will retry)
	I1018 15:06:19.090309  352142 start.go:305] selected driver: docker
	I1018 15:06:19.090324  352142 start.go:925] validating driver "docker" against <nil>
	I1018 15:06:19.090335  352142 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:06:19.090980  352142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:06:19.147933  352142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 15:06:19.138241028 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:06:19.148135  352142 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1018 15:06:19.148176  352142 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1018 15:06:19.148433  352142 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 15:06:19.150539  352142 out.go:179] * Using Docker driver with root privileges
	I1018 15:06:19.151779  352142 cni.go:84] Creating CNI manager for ""
	I1018 15:06:19.151848  352142 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:06:19.151872  352142 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 15:06:19.151980  352142 start.go:349] cluster config:
	{Name:newest-cni-741831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-741831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:06:19.153327  352142 out.go:179] * Starting "newest-cni-741831" primary control-plane node in "newest-cni-741831" cluster
	I1018 15:06:19.154334  352142 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:06:19.155556  352142 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:06:19.156744  352142 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:06:19.156787  352142 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 15:06:19.156813  352142 cache.go:58] Caching tarball of preloaded images
	I1018 15:06:19.156868  352142 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 15:06:19.156962  352142 preload.go:233] Found /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 15:06:19.156978  352142 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 15:06:19.157137  352142 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/config.json ...
	I1018 15:06:19.157171  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/config.json: {Name:mkd13aa7acfbed253b9ba5a36cce3dfa1f0aceee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:19.176402  352142 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:06:19.176421  352142 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 15:06:19.176437  352142 cache.go:232] Successfully downloaded all kic artifacts
	I1018 15:06:19.176475  352142 start.go:360] acquireMachinesLock for newest-cni-741831: {Name:mk05ea0bcc583fa4b3d237c8091a165605e0fbe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:06:19.176588  352142 start.go:364] duration metric: took 94.483µs to acquireMachinesLock for "newest-cni-741831"
	I1018 15:06:19.176621  352142 start.go:93] Provisioning new machine with config: &{Name:newest-cni-741831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-741831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:06:19.176710  352142 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Oct 18 15:06:10 embed-certs-775590 crio[800]: time="2025-10-18T15:06:10.762505854Z" level=info msg="Starting container: fe0efd5a70054a3bdac2f7c427e709147d814edc2eb67b3c21c88a7cee5254ab" id=4406aec0-042b-49c8-8283-f530ce94b8d2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:06:10 embed-certs-775590 crio[800]: time="2025-10-18T15:06:10.76474807Z" level=info msg="Started container" PID=1843 containerID=fe0efd5a70054a3bdac2f7c427e709147d814edc2eb67b3c21c88a7cee5254ab description=kube-system/coredns-66bc5c9577-4b6bm/coredns id=4406aec0-042b-49c8-8283-f530ce94b8d2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5310064d52249a6d3dbfdc60f5efe5c592715110473b59fda6df1e008197826c
	Oct 18 15:06:13 embed-certs-775590 crio[800]: time="2025-10-18T15:06:13.246889988Z" level=info msg="Running pod sandbox: default/busybox/POD" id=37b1aca1-a001-4c38-a95a-b52a31a041ac name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:06:13 embed-certs-775590 crio[800]: time="2025-10-18T15:06:13.247022909Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:13 embed-certs-775590 crio[800]: time="2025-10-18T15:06:13.252091679Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:52617e4fa6fdb7570b2c2294796a0e8d7eee6b4a25bb2f8d9621a1a2ada9d69e UID:5580a092-dcd3-46a3-b64b-aef85291de1b NetNS:/var/run/netns/ec57854a-74b2-4baa-9ed5-1e2ff7355005 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009902d8}] Aliases:map[]}"
	Oct 18 15:06:13 embed-certs-775590 crio[800]: time="2025-10-18T15:06:13.252132172Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 15:06:13 embed-certs-775590 crio[800]: time="2025-10-18T15:06:13.26324407Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:52617e4fa6fdb7570b2c2294796a0e8d7eee6b4a25bb2f8d9621a1a2ada9d69e UID:5580a092-dcd3-46a3-b64b-aef85291de1b NetNS:/var/run/netns/ec57854a-74b2-4baa-9ed5-1e2ff7355005 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009902d8}] Aliases:map[]}"
	Oct 18 15:06:13 embed-certs-775590 crio[800]: time="2025-10-18T15:06:13.263379159Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 15:06:13 embed-certs-775590 crio[800]: time="2025-10-18T15:06:13.264139225Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 15:06:13 embed-certs-775590 crio[800]: time="2025-10-18T15:06:13.264894668Z" level=info msg="Ran pod sandbox 52617e4fa6fdb7570b2c2294796a0e8d7eee6b4a25bb2f8d9621a1a2ada9d69e with infra container: default/busybox/POD" id=37b1aca1-a001-4c38-a95a-b52a31a041ac name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:06:13 embed-certs-775590 crio[800]: time="2025-10-18T15:06:13.266169795Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f7abb796-c0ac-484b-b41e-9b93f9ed4180 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:13 embed-certs-775590 crio[800]: time="2025-10-18T15:06:13.266299886Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f7abb796-c0ac-484b-b41e-9b93f9ed4180 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:13 embed-certs-775590 crio[800]: time="2025-10-18T15:06:13.266335551Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f7abb796-c0ac-484b-b41e-9b93f9ed4180 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:13 embed-certs-775590 crio[800]: time="2025-10-18T15:06:13.267125541Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=820c5054-d07f-4857-9edb-2109bfa00de7 name=/runtime.v1.ImageService/PullImage
	Oct 18 15:06:13 embed-certs-775590 crio[800]: time="2025-10-18T15:06:13.271291647Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 15:06:15 embed-certs-775590 crio[800]: time="2025-10-18T15:06:15.501553585Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=820c5054-d07f-4857-9edb-2109bfa00de7 name=/runtime.v1.ImageService/PullImage
	Oct 18 15:06:15 embed-certs-775590 crio[800]: time="2025-10-18T15:06:15.502369599Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=13fbb438-9b1b-4159-8ee4-d7edffece86f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:15 embed-certs-775590 crio[800]: time="2025-10-18T15:06:15.503980943Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d9ad5ffb-9d52-4210-a5d9-1d78fc28af2d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:15 embed-certs-775590 crio[800]: time="2025-10-18T15:06:15.507525486Z" level=info msg="Creating container: default/busybox/busybox" id=82bb7596-9f58-43e3-91e3-fd798a0833b9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:06:15 embed-certs-775590 crio[800]: time="2025-10-18T15:06:15.508447517Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:15 embed-certs-775590 crio[800]: time="2025-10-18T15:06:15.51206312Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:15 embed-certs-775590 crio[800]: time="2025-10-18T15:06:15.512482471Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:15 embed-certs-775590 crio[800]: time="2025-10-18T15:06:15.536738092Z" level=info msg="Created container 87b0316ff356d60667dbe3e6deb800a3b94789e77f5ec4f28fadbc3a59e30d0c: default/busybox/busybox" id=82bb7596-9f58-43e3-91e3-fd798a0833b9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:06:15 embed-certs-775590 crio[800]: time="2025-10-18T15:06:15.537430041Z" level=info msg="Starting container: 87b0316ff356d60667dbe3e6deb800a3b94789e77f5ec4f28fadbc3a59e30d0c" id=992407d5-fed7-4e75-89dd-99bd3b08b580 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:06:15 embed-certs-775590 crio[800]: time="2025-10-18T15:06:15.539676569Z" level=info msg="Started container" PID=1918 containerID=87b0316ff356d60667dbe3e6deb800a3b94789e77f5ec4f28fadbc3a59e30d0c description=default/busybox/busybox id=992407d5-fed7-4e75-89dd-99bd3b08b580 name=/runtime.v1.RuntimeService/StartContainer sandboxID=52617e4fa6fdb7570b2c2294796a0e8d7eee6b4a25bb2f8d9621a1a2ada9d69e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	87b0316ff356d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   52617e4fa6fdb       busybox                                      default
	fe0efd5a70054       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   5310064d52249       coredns-66bc5c9577-4b6bm                     kube-system
	e1a65a853bf79       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   eed472aaaf8d3       storage-provisioner                          kube-system
	e91cb907054f4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   eb490ff93e715       kindnet-nkkwg                                kube-system
	5a3472dcc5640       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   ce2e16afcd388       kube-proxy-clcpk                             kube-system
	e9693c064ba1c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   d5efffc5f41ad       kube-scheduler-embed-certs-775590            kube-system
	abecf97c080b0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   3bef6a76398c7       kube-apiserver-embed-certs-775590            kube-system
	7a7b784514cdf       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   039d5eac017b2       etcd-embed-certs-775590                      kube-system
	f79512ac2ea18       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   d46bda34d97bc       kube-controller-manager-embed-certs-775590   kube-system
	
	
	==> coredns [fe0efd5a70054a3bdac2f7c427e709147d814edc2eb67b3c21c88a7cee5254ab] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46344 - 18556 "HINFO IN 5266284875332757662.1914861495005836504. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.108816348s
	
	
	==> describe nodes <==
	Name:               embed-certs-775590
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-775590
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=embed-certs-775590
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_05_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:05:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-775590
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:06:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:06:10 +0000   Sat, 18 Oct 2025 15:05:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:06:10 +0000   Sat, 18 Oct 2025 15:05:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:06:10 +0000   Sat, 18 Oct 2025 15:05:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 15:06:10 +0000   Sat, 18 Oct 2025 15:06:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-775590
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                df1f36b9-fc29-426b-bde8-96e4a3ead557
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-4b6bm                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-embed-certs-775590                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-nkkwg                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-embed-certs-775590             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-775590    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-clcpk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-embed-certs-775590             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node embed-certs-775590 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node embed-certs-775590 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node embed-certs-775590 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s   node-controller  Node embed-certs-775590 event: Registered Node embed-certs-775590 in Controller
	  Normal  NodeReady                12s   kubelet          Node embed-certs-775590 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	
	
	==> etcd [7a7b784514cdf9c0250087c9974e4ef108f5169d116e6bfa41049dda6d2dc2ba] <==
	{"level":"info","ts":"2025-10-18T15:05:54.500529Z","caller":"traceutil/trace.go:172","msg":"trace[1482877368] transaction","detail":"{read_only:false; response_revision:261; number_of_response:1; }","duration":"106.273928ms","start":"2025-10-18T15:05:54.394240Z","end":"2025-10-18T15:05:54.500514Z","steps":["trace[1482877368] 'process raft request'  (duration: 106.244993ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T15:05:54.500618Z","caller":"traceutil/trace.go:172","msg":"trace[1565069478] transaction","detail":"{read_only:false; response_revision:260; number_of_response:1; }","duration":"145.363488ms","start":"2025-10-18T15:05:54.355225Z","end":"2025-10-18T15:05:54.500589Z","steps":["trace[1565069478] 'process raft request'  (duration: 143.251913ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T15:05:54.922285Z","caller":"traceutil/trace.go:172","msg":"trace[1230180030] linearizableReadLoop","detail":"{readStateIndex:270; appliedIndex:270; }","duration":"139.351983ms","start":"2025-10-18T15:05:54.782906Z","end":"2025-10-18T15:05:54.922258Z","steps":["trace[1230180030] 'read index received'  (duration: 139.339624ms)","trace[1230180030] 'applied index is now lower than readState.Index'  (duration: 11.069µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T15:05:54.922469Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.549149ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-775590\" limit:1 ","response":"range_response_count:1 size:6294"}
	{"level":"info","ts":"2025-10-18T15:05:54.922488Z","caller":"traceutil/trace.go:172","msg":"trace[446207385] transaction","detail":"{read_only:false; response_revision:263; number_of_response:1; }","duration":"216.902057ms","start":"2025-10-18T15:05:54.705562Z","end":"2025-10-18T15:05:54.922464Z","steps":["trace[446207385] 'process raft request'  (duration: 216.743546ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T15:05:54.922506Z","caller":"traceutil/trace.go:172","msg":"trace[1846765327] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-embed-certs-775590; range_end:; response_count:1; response_revision:262; }","duration":"139.600257ms","start":"2025-10-18T15:05:54.782896Z","end":"2025-10-18T15:05:54.922496Z","steps":["trace[1846765327] 'agreement among raft nodes before linearized reading'  (duration: 139.457836ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T15:05:54.924999Z","caller":"traceutil/trace.go:172","msg":"trace[651323800] transaction","detail":"{read_only:false; number_of_response:0; response_revision:263; }","duration":"140.508557ms","start":"2025-10-18T15:05:54.784438Z","end":"2025-10-18T15:05:54.924946Z","steps":["trace[651323800] 'process raft request'  (duration: 140.379042ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T15:05:54.925015Z","caller":"traceutil/trace.go:172","msg":"trace[1029158129] transaction","detail":"{read_only:false; number_of_response:0; response_revision:263; }","duration":"140.434111ms","start":"2025-10-18T15:05:54.784569Z","end":"2025-10-18T15:05:54.925004Z","steps":["trace[1029158129] 'process raft request'  (duration: 140.41129ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T15:05:54.925040Z","caller":"traceutil/trace.go:172","msg":"trace[910060895] transaction","detail":"{read_only:false; number_of_response:0; response_revision:263; }","duration":"140.467549ms","start":"2025-10-18T15:05:54.784535Z","end":"2025-10-18T15:05:54.925003Z","steps":["trace[910060895] 'process raft request'  (duration: 140.369628ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T15:05:54.925042Z","caller":"traceutil/trace.go:172","msg":"trace[1998636143] transaction","detail":"{read_only:false; number_of_response:0; response_revision:263; }","duration":"140.481035ms","start":"2025-10-18T15:05:54.784549Z","end":"2025-10-18T15:05:54.925031Z","steps":["trace[1998636143] 'process raft request'  (duration: 140.396225ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T15:05:55.065818Z","caller":"traceutil/trace.go:172","msg":"trace[604204715] transaction","detail":"{read_only:false; response_revision:265; number_of_response:1; }","duration":"110.491425ms","start":"2025-10-18T15:05:54.955305Z","end":"2025-10-18T15:05:55.065796Z","steps":["trace[604204715] 'process raft request'  (duration: 110.44946ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T15:05:55.065873Z","caller":"traceutil/trace.go:172","msg":"trace[1781484335] transaction","detail":"{read_only:false; response_revision:264; number_of_response:1; }","duration":"120.951507ms","start":"2025-10-18T15:05:54.944900Z","end":"2025-10-18T15:05:55.065852Z","steps":["trace[1781484335] 'process raft request'  (duration: 119.887008ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T15:05:55.312726Z","caller":"traceutil/trace.go:172","msg":"trace[2105961500] linearizableReadLoop","detail":"{readStateIndex:283; appliedIndex:283; }","duration":"107.589881ms","start":"2025-10-18T15:05:55.205110Z","end":"2025-10-18T15:05:55.312700Z","steps":["trace[2105961500] 'read index received'  (duration: 107.579985ms)","trace[2105961500] 'applied index is now lower than readState.Index'  (duration: 8.52µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T15:05:55.355121Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.992811ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-10-18T15:05:55.355192Z","caller":"traceutil/trace.go:172","msg":"trace[1885921762] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:271; }","duration":"150.074406ms","start":"2025-10-18T15:05:55.205099Z","end":"2025-10-18T15:05:55.355174Z","steps":["trace[1885921762] 'agreement among raft nodes before linearized reading'  (duration: 107.706185ms)","trace[1885921762] 'range keys from in-memory index tree'  (duration: 42.18823ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T15:05:55.355287Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.705445ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/kindnet\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T15:05:55.355341Z","caller":"traceutil/trace.go:172","msg":"trace[1631058220] range","detail":"{range_begin:/registry/clusterroles/kindnet; range_end:; response_count:0; response_revision:272; }","duration":"135.76728ms","start":"2025-10-18T15:05:55.219561Z","end":"2025-10-18T15:05:55.355329Z","steps":["trace[1631058220] 'agreement among raft nodes before linearized reading'  (duration: 135.678535ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T15:05:55.355488Z","caller":"traceutil/trace.go:172","msg":"trace[1608193484] transaction","detail":"{read_only:false; response_revision:272; number_of_response:1; }","duration":"170.036544ms","start":"2025-10-18T15:05:55.185433Z","end":"2025-10-18T15:05:55.355469Z","steps":["trace[1608193484] 'process raft request'  (duration: 127.308645ms)","trace[1608193484] 'compare'  (duration: 42.38322ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T15:05:55.488318Z","caller":"traceutil/trace.go:172","msg":"trace[1986287671] linearizableReadLoop","detail":"{readStateIndex:285; appliedIndex:285; }","duration":"120.440574ms","start":"2025-10-18T15:05:55.367852Z","end":"2025-10-18T15:05:55.488292Z","steps":["trace[1986287671] 'read index received'  (duration: 120.430413ms)","trace[1986287671] 'applied index is now lower than readState.Index'  (duration: 9.068µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T15:05:55.501743Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.861263ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-cidrs-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T15:05:55.501815Z","caller":"traceutil/trace.go:172","msg":"trace[87304562] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-cidrs-controller; range_end:; response_count:0; response_revision:273; }","duration":"133.959342ms","start":"2025-10-18T15:05:55.367838Z","end":"2025-10-18T15:05:55.501797Z","steps":["trace[87304562] 'agreement among raft nodes before linearized reading'  (duration: 120.536024ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T15:05:55.501910Z","caller":"traceutil/trace.go:172","msg":"trace[1546011670] transaction","detail":"{read_only:false; response_revision:274; number_of_response:1; }","duration":"143.422819ms","start":"2025-10-18T15:05:55.358469Z","end":"2025-10-18T15:05:55.501892Z","steps":["trace[1546011670] 'process raft request'  (duration: 129.872928ms)","trace[1546011670] 'compare'  (duration: 13.395944ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T15:05:55.714890Z","caller":"traceutil/trace.go:172","msg":"trace[319192727] transaction","detail":"{read_only:false; response_revision:278; number_of_response:1; }","duration":"137.719862ms","start":"2025-10-18T15:05:55.577153Z","end":"2025-10-18T15:05:55.714873Z","steps":["trace[319192727] 'process raft request'  (duration: 137.658132ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T15:05:55.714928Z","caller":"traceutil/trace.go:172","msg":"trace[636065896] transaction","detail":"{read_only:false; response_revision:277; number_of_response:1; }","duration":"138.070504ms","start":"2025-10-18T15:05:55.576818Z","end":"2025-10-18T15:05:55.714888Z","steps":["trace[636065896] 'process raft request'  (duration: 92.820771ms)","trace[636065896] 'compare'  (duration: 44.912574ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T15:05:55.846536Z","caller":"traceutil/trace.go:172","msg":"trace[1167052835] transaction","detail":"{read_only:false; response_revision:279; number_of_response:1; }","duration":"120.831504ms","start":"2025-10-18T15:05:55.725689Z","end":"2025-10-18T15:05:55.846521Z","steps":["trace[1167052835] 'process raft request'  (duration: 119.373685ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:06:23 up  2:48,  0 user,  load average: 3.40, 2.90, 1.96
	Linux embed-certs-775590 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e91cb907054f4fae6f6733a257d6b1c0eb974180cd4841c2da474ee5d3a8b3d0] <==
	I1018 15:05:59.701084       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 15:05:59.701601       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 15:05:59.701810       1 main.go:148] setting mtu 1500 for CNI 
	I1018 15:05:59.701865       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 15:05:59.701926       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T15:05:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 15:05:59.998317       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 15:05:59.998426       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 15:05:59.998459       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 15:05:59.998886       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 15:06:00.398653       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 15:06:00.398680       1 metrics.go:72] Registering metrics
	I1018 15:06:00.398744       1 controller.go:711] "Syncing nftables rules"
	I1018 15:06:10.000986       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 15:06:10.001041       1 main.go:301] handling current node
	I1018 15:06:20.001999       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 15:06:20.002034       1 main.go:301] handling current node
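Note: the single error above ("nri plugin exited: failed to connect to NRI service") only means /var/run/nri/nri.sock does not exist on the node; NRI is optional, and the subsequent cache-sync and node-handling lines show kindnet carrying on normally. A sketch to verify, assuming the profile name from this dump:

	# the socket exists only when the container runtime (CRI-O here) has NRI enabled
	minikube ssh -p embed-certs-775590 -- ls -l /var/run/nri/nri.sock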
	
	
	==> kube-apiserver [abecf97c080b00ebe78d02b372549092093a191f7e5c3b34f168eca590530f68] <==
	I1018 15:05:51.243448       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 15:05:51.252137       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:05:51.253191       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 15:05:51.253403       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 15:05:51.259903       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:05:51.260022       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 15:05:51.260045       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 15:05:52.131103       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 15:05:52.140422       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 15:05:52.140547       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:05:52.773953       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:05:52.816550       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:05:52.936449       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 15:05:52.943841       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 15:05:52.945078       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 15:05:52.951721       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 15:05:53.194696       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 15:05:53.918230       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 15:05:54.074215       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 15:05:54.116089       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 15:05:58.962694       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 15:05:59.060275       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 15:05:59.212484       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:05:59.217083       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1018 15:06:21.048359       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:54578: use of closed network connection
	
	
	==> kube-controller-manager [f79512ac2ea180ed7c9610fc2282d367298eca2cb276a7f0313ab693943375af] <==
	I1018 15:05:58.158121       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 15:05:58.158229       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 15:05:58.159397       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 15:05:58.159414       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 15:05:58.161638       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 15:05:58.162770       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 15:05:58.162786       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 15:05:58.162843       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 15:05:58.165120       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 15:05:58.165170       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 15:05:58.165206       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 15:05:58.165291       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 15:05:58.165298       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 15:05:58.165305       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 15:05:58.167363       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 15:05:58.170655       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 15:05:58.171933       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 15:05:58.173007       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-775590" podCIDRs=["10.244.0.0/24"]
	I1018 15:05:58.175740       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 15:05:58.175874       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 15:05:58.176019       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-775590"
	I1018 15:05:58.176077       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 15:05:58.179131       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 15:05:58.181508       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 15:06:13.177852       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5a3472dcc5640511af8ccc2a152ebb14d396e47b10826856ee5f8627a0ef71ed] <==
	I1018 15:05:59.489307       1 server_linux.go:53] "Using iptables proxy"
	I1018 15:05:59.547534       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 15:05:59.648557       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 15:05:59.648597       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 15:05:59.648710       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 15:05:59.671878       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 15:05:59.671964       1 server_linux.go:132] "Using iptables Proxier"
	I1018 15:05:59.679247       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 15:05:59.679887       1 server.go:527] "Version info" version="v1.34.1"
	I1018 15:05:59.680013       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:05:59.681852       1 config.go:309] "Starting node config controller"
	I1018 15:05:59.682068       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 15:05:59.682512       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 15:05:59.682166       1 config.go:200] "Starting service config controller"
	I1018 15:05:59.682587       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 15:05:59.682176       1 config.go:106] "Starting endpoint slice config controller"
	I1018 15:05:59.682642       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 15:05:59.682843       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 15:05:59.682893       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 15:05:59.783388       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 15:05:59.783420       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 15:05:59.783403       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
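Note: the one error-level kube-proxy line above is a configuration hint, not a failure: with nodePortAddresses unset, NodePort services accept connections on every local IP. Acting on the suggested `--nodeport-addresses primary` means editing the kube-proxy ConfigMap that kubeadm-style clusters (including minikube) use; a sketch, assuming that layout:

	# inspect the current setting, set it to ["primary"], then recreate the kube-proxy pod
	kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n -A1 nodePortAddresses
	kubectl -n kube-system edit configmap kube-proxy
	kubectl -n kube-system delete pod -l k8s-app=kube-proxy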
	
	
	==> kube-scheduler [e9693c064ba1cf7fffe3b6da7f43c0ce00232131395314681e02b7c12544e369] <==
	E1018 15:05:51.196392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 15:05:51.196629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 15:05:51.196652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 15:05:51.196734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 15:05:51.196753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 15:05:51.196753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 15:05:51.198122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 15:05:51.192547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 15:05:52.009234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 15:05:52.040761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 15:05:52.074578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 15:05:52.096643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 15:05:52.196042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 15:05:52.251163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 15:05:52.259229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 15:05:52.270664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 15:05:52.346152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 15:05:52.349219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 15:05:52.360493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 15:05:52.406683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 15:05:52.421031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 15:05:52.436497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 15:05:52.470995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 15:05:52.493653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1018 15:05:54.579938       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
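Note: the burst of "Failed to watch ... is forbidden" errors above is a startup race, not an RBAC misconfiguration: the scheduler begins listing resources before the apiserver finishes bootstrapping RBAC (roles and rolebindings land at 15:05:52.77-52.82 in the kube-apiserver log above), and the final "Caches are synced" line at 15:05:54.58 shows it recovered. A quick post-hoc check, assuming kubectl impersonation is available against this cluster:

	# both should return "yes" once bootstrap RBAC exists
	kubectl auth can-i list nodes --as=system:kube-scheduler
	kubectl auth can-i list statefulsets.apps --as=system:kube-scheduler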
	
	
	==> kubelet <==
	Oct 18 15:05:55 embed-certs-775590 kubelet[1338]: I1018 15:05:55.067644    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-775590" podStartSLOduration=2.067621999 podStartE2EDuration="2.067621999s" podCreationTimestamp="2025-10-18 15:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:05:55.06761784 +0000 UTC m=+1.409671634" watchObservedRunningTime="2025-10-18 15:05:55.067621999 +0000 UTC m=+1.409675785"
	Oct 18 15:05:55 embed-certs-775590 kubelet[1338]: I1018 15:05:55.111088    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-775590" podStartSLOduration=3.111044074 podStartE2EDuration="3.111044074s" podCreationTimestamp="2025-10-18 15:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:05:55.110775366 +0000 UTC m=+1.452829164" watchObservedRunningTime="2025-10-18 15:05:55.111044074 +0000 UTC m=+1.453097850"
	Oct 18 15:05:55 embed-certs-775590 kubelet[1338]: I1018 15:05:55.129147    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-775590" podStartSLOduration=2.129123546 podStartE2EDuration="2.129123546s" podCreationTimestamp="2025-10-18 15:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:05:55.12871937 +0000 UTC m=+1.470773164" watchObservedRunningTime="2025-10-18 15:05:55.129123546 +0000 UTC m=+1.471177338"
	Oct 18 15:05:55 embed-certs-775590 kubelet[1338]: I1018 15:05:55.158931    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-775590" podStartSLOduration=2.158894388 podStartE2EDuration="2.158894388s" podCreationTimestamp="2025-10-18 15:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:05:55.14086874 +0000 UTC m=+1.482922531" watchObservedRunningTime="2025-10-18 15:05:55.158894388 +0000 UTC m=+1.500948184"
	Oct 18 15:05:58 embed-certs-775590 kubelet[1338]: I1018 15:05:58.268216    1338 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 15:05:58 embed-certs-775590 kubelet[1338]: I1018 15:05:58.268898    1338 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 15:05:59 embed-certs-775590 kubelet[1338]: I1018 15:05:59.167502    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fad28e3e-33cd-40c0-834a-348d8aa40044-lib-modules\") pod \"kindnet-nkkwg\" (UID: \"fad28e3e-33cd-40c0-834a-348d8aa40044\") " pod="kube-system/kindnet-nkkwg"
	Oct 18 15:05:59 embed-certs-775590 kubelet[1338]: I1018 15:05:59.167553    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhzg4\" (UniqueName: \"kubernetes.io/projected/fad28e3e-33cd-40c0-834a-348d8aa40044-kube-api-access-xhzg4\") pod \"kindnet-nkkwg\" (UID: \"fad28e3e-33cd-40c0-834a-348d8aa40044\") " pod="kube-system/kindnet-nkkwg"
	Oct 18 15:05:59 embed-certs-775590 kubelet[1338]: I1018 15:05:59.167579    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c9292e6-02d8-4b49-87b4-5291801003a8-lib-modules\") pod \"kube-proxy-clcpk\" (UID: \"2c9292e6-02d8-4b49-87b4-5291801003a8\") " pod="kube-system/kube-proxy-clcpk"
	Oct 18 15:05:59 embed-certs-775590 kubelet[1338]: I1018 15:05:59.167599    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nstw6\" (UniqueName: \"kubernetes.io/projected/2c9292e6-02d8-4b49-87b4-5291801003a8-kube-api-access-nstw6\") pod \"kube-proxy-clcpk\" (UID: \"2c9292e6-02d8-4b49-87b4-5291801003a8\") " pod="kube-system/kube-proxy-clcpk"
	Oct 18 15:05:59 embed-certs-775590 kubelet[1338]: I1018 15:05:59.167633    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fad28e3e-33cd-40c0-834a-348d8aa40044-cni-cfg\") pod \"kindnet-nkkwg\" (UID: \"fad28e3e-33cd-40c0-834a-348d8aa40044\") " pod="kube-system/kindnet-nkkwg"
	Oct 18 15:05:59 embed-certs-775590 kubelet[1338]: I1018 15:05:59.167651    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fad28e3e-33cd-40c0-834a-348d8aa40044-xtables-lock\") pod \"kindnet-nkkwg\" (UID: \"fad28e3e-33cd-40c0-834a-348d8aa40044\") " pod="kube-system/kindnet-nkkwg"
	Oct 18 15:05:59 embed-certs-775590 kubelet[1338]: I1018 15:05:59.167731    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2c9292e6-02d8-4b49-87b4-5291801003a8-kube-proxy\") pod \"kube-proxy-clcpk\" (UID: \"2c9292e6-02d8-4b49-87b4-5291801003a8\") " pod="kube-system/kube-proxy-clcpk"
	Oct 18 15:05:59 embed-certs-775590 kubelet[1338]: I1018 15:05:59.167808    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c9292e6-02d8-4b49-87b4-5291801003a8-xtables-lock\") pod \"kube-proxy-clcpk\" (UID: \"2c9292e6-02d8-4b49-87b4-5291801003a8\") " pod="kube-system/kube-proxy-clcpk"
	Oct 18 15:05:59 embed-certs-775590 kubelet[1338]: I1018 15:05:59.814747    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-nkkwg" podStartSLOduration=0.814724493 podStartE2EDuration="814.724493ms" podCreationTimestamp="2025-10-18 15:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:05:59.814433334 +0000 UTC m=+6.156487150" watchObservedRunningTime="2025-10-18 15:05:59.814724493 +0000 UTC m=+6.156778278"
	Oct 18 15:06:00 embed-certs-775590 kubelet[1338]: I1018 15:06:00.040599    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-clcpk" podStartSLOduration=1.040577071 podStartE2EDuration="1.040577071s" podCreationTimestamp="2025-10-18 15:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:05:59.832019202 +0000 UTC m=+6.174072996" watchObservedRunningTime="2025-10-18 15:06:00.040577071 +0000 UTC m=+6.382630865"
	Oct 18 15:06:10 embed-certs-775590 kubelet[1338]: I1018 15:06:10.337383    1338 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 15:06:10 embed-certs-775590 kubelet[1338]: I1018 15:06:10.446622    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cc3423b2-8f6e-406a-9441-1e44a06f5542-tmp\") pod \"storage-provisioner\" (UID: \"cc3423b2-8f6e-406a-9441-1e44a06f5542\") " pod="kube-system/storage-provisioner"
	Oct 18 15:06:10 embed-certs-775590 kubelet[1338]: I1018 15:06:10.446738    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a8bee21-695c-44b2-8d1c-9f5623ba836f-config-volume\") pod \"coredns-66bc5c9577-4b6bm\" (UID: \"3a8bee21-695c-44b2-8d1c-9f5623ba836f\") " pod="kube-system/coredns-66bc5c9577-4b6bm"
	Oct 18 15:06:10 embed-certs-775590 kubelet[1338]: I1018 15:06:10.446855    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj2dg\" (UniqueName: \"kubernetes.io/projected/cc3423b2-8f6e-406a-9441-1e44a06f5542-kube-api-access-lj2dg\") pod \"storage-provisioner\" (UID: \"cc3423b2-8f6e-406a-9441-1e44a06f5542\") " pod="kube-system/storage-provisioner"
	Oct 18 15:06:10 embed-certs-775590 kubelet[1338]: I1018 15:06:10.446896    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khc6b\" (UniqueName: \"kubernetes.io/projected/3a8bee21-695c-44b2-8d1c-9f5623ba836f-kube-api-access-khc6b\") pod \"coredns-66bc5c9577-4b6bm\" (UID: \"3a8bee21-695c-44b2-8d1c-9f5623ba836f\") " pod="kube-system/coredns-66bc5c9577-4b6bm"
	Oct 18 15:06:10 embed-certs-775590 kubelet[1338]: I1018 15:06:10.838297    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4b6bm" podStartSLOduration=11.838275558 podStartE2EDuration="11.838275558s" podCreationTimestamp="2025-10-18 15:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:06:10.837892297 +0000 UTC m=+17.179946090" watchObservedRunningTime="2025-10-18 15:06:10.838275558 +0000 UTC m=+17.180329352"
	Oct 18 15:06:10 embed-certs-775590 kubelet[1338]: I1018 15:06:10.848084    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=10.848061209 podStartE2EDuration="10.848061209s" podCreationTimestamp="2025-10-18 15:06:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:06:10.847507949 +0000 UTC m=+17.189561762" watchObservedRunningTime="2025-10-18 15:06:10.848061209 +0000 UTC m=+17.190115003"
	Oct 18 15:06:13 embed-certs-775590 kubelet[1338]: I1018 15:06:13.065945    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm8x9\" (UniqueName: \"kubernetes.io/projected/5580a092-dcd3-46a3-b64b-aef85291de1b-kube-api-access-cm8x9\") pod \"busybox\" (UID: \"5580a092-dcd3-46a3-b64b-aef85291de1b\") " pod="default/busybox"
	Oct 18 15:06:15 embed-certs-775590 kubelet[1338]: I1018 15:06:15.855078    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.618278106 podStartE2EDuration="3.855055489s" podCreationTimestamp="2025-10-18 15:06:12 +0000 UTC" firstStartedPulling="2025-10-18 15:06:13.266623989 +0000 UTC m=+19.608677762" lastFinishedPulling="2025-10-18 15:06:15.503401369 +0000 UTC m=+21.845455145" observedRunningTime="2025-10-18 15:06:15.854157573 +0000 UTC m=+22.196211368" watchObservedRunningTime="2025-10-18 15:06:15.855055489 +0000 UTC m=+22.197109265"
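Note: the pod_startup_latency_tracker lines above record each pod's startup SLO duration; the "0001-01-01 00:00:00" pull timestamps are Go's zero-time sentinel for pods whose images needed no pull (only the busybox pod shows real pull times). The same data is exported as kubelet metrics; a sketch using the node name from this dump:

	# kubelet_pod_start_duration_seconds / kubelet_pod_start_sli_duration_seconds histograms
	kubectl get --raw "/api/v1/nodes/embed-certs-775590/proxy/metrics" | grep kubelet_pod_start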
	
	
	==> storage-provisioner [e1a65a853bf7935f365d36813f10d01bf9223e6ef14e50fbb96413292abf774c] <==
	I1018 15:06:10.760732       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 15:06:10.771155       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 15:06:10.771204       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 15:06:10.773560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:10.778033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 15:06:10.778217       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 15:06:10.778414       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-775590_8c4b3bb9-0f92-49a5-a224-f7e4e6d573b2!
	I1018 15:06:10.778464       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b555887a-6bab-4008-b93c-f9bed67d8ecd", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-775590_8c4b3bb9-0f92-49a5-a224-f7e4e6d573b2 became leader
	W1018 15:06:10.780445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:10.785384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 15:06:10.879215       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-775590_8c4b3bb9-0f92-49a5-a224-f7e4e6d573b2!
	W1018 15:06:12.791615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:12.796242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:14.799070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:14.803255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:16.808611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:16.815531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:18.818265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:18.822077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:20.825633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:20.830156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:22.833860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:22.894696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
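Note: the repeating "v1 Endpoints is deprecated" warnings above come from the provisioner's Endpoints-based leader election, which re-reads and re-writes the same Endpoints object on every lease renewal (the Event line above names it). Harmless, but easy to confirm; a sketch:

	# the leader identity is stored as an annotation on the lock object
	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o jsonpath='{.metadata.annotations}'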
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-775590 -n embed-certs-775590
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-775590 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.76s)

TestStartStop/group/no-preload/serial/Pause (5.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-165275 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-165275 --alsologtostderr -v=1: exit status 80 (1.765513819s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-165275 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 15:06:35.272752  355810 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:06:35.272874  355810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:06:35.272885  355810 out.go:374] Setting ErrFile to fd 2...
	I1018 15:06:35.272889  355810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:06:35.273110  355810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:06:35.273353  355810 out.go:368] Setting JSON to false
	I1018 15:06:35.273401  355810 mustload.go:65] Loading cluster: no-preload-165275
	I1018 15:06:35.273733  355810 config.go:182] Loaded profile config "no-preload-165275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:35.274158  355810 cli_runner.go:164] Run: docker container inspect no-preload-165275 --format={{.State.Status}}
	I1018 15:06:35.291487  355810 host.go:66] Checking if "no-preload-165275" exists ...
	I1018 15:06:35.291785  355810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:06:35.354482  355810 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:89 OomKillDisable:false NGoroutines:89 SystemTime:2025-10-18 15:06:35.34415602 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:06:35.355382  355810 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-165275 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 15:06:35.357494  355810 out.go:179] * Pausing node no-preload-165275 ... 
	I1018 15:06:35.358663  355810 host.go:66] Checking if "no-preload-165275" exists ...
	I1018 15:06:35.358969  355810 ssh_runner.go:195] Run: systemctl --version
	I1018 15:06:35.359008  355810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165275
	I1018 15:06:35.377285  355810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/no-preload-165275/id_rsa Username:docker}
	I1018 15:06:35.472997  355810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:06:35.494605  355810 pause.go:52] kubelet running: true
	I1018 15:06:35.494685  355810 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:06:35.676071  355810 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:06:35.676233  355810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:06:35.762832  355810 cri.go:89] found id: "bd99f521b104d52b1eeda1acab34e9e643be51bbb7965e0beed26ac85243b990"
	I1018 15:06:35.762860  355810 cri.go:89] found id: "d8cad0e51da9ba1a5945123306231034e96864d53528c3d0398f4332e290fd40"
	I1018 15:06:35.762867  355810 cri.go:89] found id: "a61bf08741c2071de9d41f7d9a959c9d0202f13a22c5d7343ac7bb3c3b93e5e2"
	I1018 15:06:35.762872  355810 cri.go:89] found id: "f24643699519eede5987d2db64babc61ecbb2bc1fbe89e7d24e540599e9fda2c"
	I1018 15:06:35.762876  355810 cri.go:89] found id: "d8643b50024f13026b83ef70e0a7a12d1d5fc9a309e6bcd49fa11236a78579ff"
	I1018 15:06:35.762881  355810 cri.go:89] found id: "d37cf270acf4cdb482c3d7fdb5fa2e8ecdf544a1b1172db005a424e0b482c119"
	I1018 15:06:35.762886  355810 cri.go:89] found id: "ce5891388244aaa439d0521f1c59f74520a5be8cfe55bae6fec434a5125ea972"
	I1018 15:06:35.762890  355810 cri.go:89] found id: "c1d28d4d24c3ece98b690ada9bd56a5d7ebdd925b9e2320e8f7d9f1b62f77b34"
	I1018 15:06:35.762892  355810 cri.go:89] found id: "3e2c583673b99348bde570e54f1913de407877ce7969439954326ffcf6f4fc31"
	I1018 15:06:35.762898  355810 cri.go:89] found id: "75a93b11eb42f4e75eb477490277c486c91ad8b2a0193107139e8ee97e00ca8c"
	I1018 15:06:35.762901  355810 cri.go:89] found id: "40d37635759ffb9d9f2cb9a03f0e608336ed376a7646906d2e3102badf4b2204"
	I1018 15:06:35.762903  355810 cri.go:89] found id: ""
	I1018 15:06:35.762964  355810 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:06:35.775674  355810 retry.go:31] will retry after 280.93181ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:06:35Z" level=error msg="open /run/runc: no such file or directory"
	I1018 15:06:36.057227  355810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:06:36.072023  355810 pause.go:52] kubelet running: false
	I1018 15:06:36.072108  355810 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:06:36.233867  355810 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:06:36.233964  355810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:06:36.308199  355810 cri.go:89] found id: "bd99f521b104d52b1eeda1acab34e9e643be51bbb7965e0beed26ac85243b990"
	I1018 15:06:36.308228  355810 cri.go:89] found id: "d8cad0e51da9ba1a5945123306231034e96864d53528c3d0398f4332e290fd40"
	I1018 15:06:36.308235  355810 cri.go:89] found id: "a61bf08741c2071de9d41f7d9a959c9d0202f13a22c5d7343ac7bb3c3b93e5e2"
	I1018 15:06:36.308240  355810 cri.go:89] found id: "f24643699519eede5987d2db64babc61ecbb2bc1fbe89e7d24e540599e9fda2c"
	I1018 15:06:36.308253  355810 cri.go:89] found id: "d8643b50024f13026b83ef70e0a7a12d1d5fc9a309e6bcd49fa11236a78579ff"
	I1018 15:06:36.308258  355810 cri.go:89] found id: "d37cf270acf4cdb482c3d7fdb5fa2e8ecdf544a1b1172db005a424e0b482c119"
	I1018 15:06:36.308262  355810 cri.go:89] found id: "ce5891388244aaa439d0521f1c59f74520a5be8cfe55bae6fec434a5125ea972"
	I1018 15:06:36.308266  355810 cri.go:89] found id: "c1d28d4d24c3ece98b690ada9bd56a5d7ebdd925b9e2320e8f7d9f1b62f77b34"
	I1018 15:06:36.308270  355810 cri.go:89] found id: "3e2c583673b99348bde570e54f1913de407877ce7969439954326ffcf6f4fc31"
	I1018 15:06:36.308279  355810 cri.go:89] found id: "75a93b11eb42f4e75eb477490277c486c91ad8b2a0193107139e8ee97e00ca8c"
	I1018 15:06:36.308286  355810 cri.go:89] found id: "40d37635759ffb9d9f2cb9a03f0e608336ed376a7646906d2e3102badf4b2204"
	I1018 15:06:36.308290  355810 cri.go:89] found id: ""
	I1018 15:06:36.308339  355810 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:06:36.321228  355810 retry.go:31] will retry after 346.733495ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:06:36Z" level=error msg="open /run/runc: no such file or directory"
	I1018 15:06:36.668798  355810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:06:36.684938  355810 pause.go:52] kubelet running: false
	I1018 15:06:36.685003  355810 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:06:36.879179  355810 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:06:36.879265  355810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:06:36.962965  355810 cri.go:89] found id: "bd99f521b104d52b1eeda1acab34e9e643be51bbb7965e0beed26ac85243b990"
	I1018 15:06:36.962987  355810 cri.go:89] found id: "d8cad0e51da9ba1a5945123306231034e96864d53528c3d0398f4332e290fd40"
	I1018 15:06:36.962991  355810 cri.go:89] found id: "a61bf08741c2071de9d41f7d9a959c9d0202f13a22c5d7343ac7bb3c3b93e5e2"
	I1018 15:06:36.962994  355810 cri.go:89] found id: "f24643699519eede5987d2db64babc61ecbb2bc1fbe89e7d24e540599e9fda2c"
	I1018 15:06:36.962996  355810 cri.go:89] found id: "d8643b50024f13026b83ef70e0a7a12d1d5fc9a309e6bcd49fa11236a78579ff"
	I1018 15:06:36.963000  355810 cri.go:89] found id: "d37cf270acf4cdb482c3d7fdb5fa2e8ecdf544a1b1172db005a424e0b482c119"
	I1018 15:06:36.963005  355810 cri.go:89] found id: "ce5891388244aaa439d0521f1c59f74520a5be8cfe55bae6fec434a5125ea972"
	I1018 15:06:36.963009  355810 cri.go:89] found id: "c1d28d4d24c3ece98b690ada9bd56a5d7ebdd925b9e2320e8f7d9f1b62f77b34"
	I1018 15:06:36.963013  355810 cri.go:89] found id: "3e2c583673b99348bde570e54f1913de407877ce7969439954326ffcf6f4fc31"
	I1018 15:06:36.963031  355810 cri.go:89] found id: "75a93b11eb42f4e75eb477490277c486c91ad8b2a0193107139e8ee97e00ca8c"
	I1018 15:06:36.963036  355810 cri.go:89] found id: "40d37635759ffb9d9f2cb9a03f0e608336ed376a7646906d2e3102badf4b2204"
	I1018 15:06:36.963040  355810 cri.go:89] found id: ""
	I1018 15:06:36.963084  355810 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:06:36.978048  355810 out.go:203] 
	W1018 15:06:36.979401  355810 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:06:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 15:06:36.979419  355810 out.go:285] * 
	W1018 15:06:36.985163  355810 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 15:06:36.987066  355810 out.go:203] 

                                                
                                                
** /stderr **
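The captured stderr pins down the failure mode: pause enumerates containers with `sudo runc list -f json`, but the runc state directory /run/runc is missing on this crio node, even though crictl (15:06:36.962 above) still lists all eleven kube-system container IDs. A minimal diagnostic sketch, assuming shell access through minikube's ssh subcommand; these are standard runc/crictl invocations for illustration, not commands run by the test itself:

$ out/minikube-linux-amd64 -p no-preload-165275 ssh -- ls /run/runc                        # expected to fail: No such file or directory
$ out/minikube-linux-amd64 -p no-preload-165275 ssh -- sudo crictl ps -q                   # CRI-O still reports the running containers
$ out/minikube-linux-amd64 -p no-preload-165275 ssh -- sudo runc --root /run/runc list -f json   # reproduces the exit status 1 above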
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-165275 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-165275
helpers_test.go:243: (dbg) docker inspect no-preload-165275:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06",
	        "Created": "2025-10-18T15:04:14.174636016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 341047,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T15:05:32.595965677Z",
	            "FinishedAt": "2025-10-18T15:05:31.463575696Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06/hostname",
	        "HostsPath": "/var/lib/docker/containers/aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06/hosts",
	        "LogPath": "/var/lib/docker/containers/aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06/aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06-json.log",
	        "Name": "/no-preload-165275",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-165275:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-165275",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06",
	                "LowerDir": "/var/lib/docker/overlay2/416402f5dd03bf1ed780ad68ce499c08cfc04a7c16ada619f14034f104f2e745-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/416402f5dd03bf1ed780ad68ce499c08cfc04a7c16ada619f14034f104f2e745/merged",
	                "UpperDir": "/var/lib/docker/overlay2/416402f5dd03bf1ed780ad68ce499c08cfc04a7c16ada619f14034f104f2e745/diff",
	                "WorkDir": "/var/lib/docker/overlay2/416402f5dd03bf1ed780ad68ce499c08cfc04a7c16ada619f14034f104f2e745/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-165275",
	                "Source": "/var/lib/docker/volumes/no-preload-165275/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-165275",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-165275",
	                "name.minikube.sigs.k8s.io": "no-preload-165275",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f17c16ab34f345abf9f30d9f39da3075239d747de88dc57cd0c8f8a84e03442",
	            "SandboxKey": "/var/run/docker/netns/8f17c16ab34f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-165275": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:8d:9d:5d:8a:fa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2decf6b0e9a2edffe7ff29802fe30453af810cd2279b900d48c499fda7236039",
	                    "EndpointID": "decb1b7a47fe613d6c395754ce37b39c788201facac8b0fac4c65463d8400028",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-165275",
	                        "aa996275db3e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
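Incidentally, the host-port mappings buried in that inspect dump can be read back directly with a Go template; it is the same template this log uses further down (15:06:25.701) to locate the SSH port. A quick sketch against this container, with the expected value taken from the NetworkSettings block above:

$ docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-165275
33068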
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-165275 -n no-preload-165275
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-165275 -n no-preload-165275: exit status 2 (333.760346ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-165275 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-165275 logs -n 25: (1.190043051s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-948537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │                     │
	│ stop    │ -p old-k8s-version-948537 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-948537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ start   │ -p old-k8s-version-948537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-165275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ stop    │ -p no-preload-165275 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-833162    │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-833162    │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ delete  │ -p kubernetes-upgrade-833162                                                                                                                                                                                                                  │ kubernetes-upgrade-833162    │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ addons  │ enable dashboard -p no-preload-165275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p embed-certs-775590 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p no-preload-165275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:06 UTC │
	│ image   │ old-k8s-version-948537 image list --format=json                                                                                                                                                                                               │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ pause   │ -p old-k8s-version-948537 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ delete  │ -p old-k8s-version-948537                                                                                                                                                                                                                     │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ delete  │ -p old-k8s-version-948537                                                                                                                                                                                                                     │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ delete  │ -p disable-driver-mounts-677415                                                                                                                                                                                                               │ disable-driver-mounts-677415 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p default-k8s-diff-port-489104 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p cert-expiration-327346 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-327346       │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ delete  │ -p cert-expiration-327346                                                                                                                                                                                                                     │ cert-expiration-327346       │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p newest-cni-741831 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-775590 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ stop    │ -p embed-certs-775590 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ image   │ no-preload-165275 image list --format=json                                                                                                                                                                                                    │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ pause   │ -p no-preload-165275 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:06:18
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:06:18.990992  352142 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:06:18.991107  352142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:06:18.991114  352142 out.go:374] Setting ErrFile to fd 2...
	I1018 15:06:18.991124  352142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:06:18.991316  352142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:06:18.991810  352142 out.go:368] Setting JSON to false
	I1018 15:06:18.993170  352142 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10130,"bootTime":1760789849,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:06:18.993269  352142 start.go:141] virtualization: kvm guest
	I1018 15:06:18.995348  352142 out.go:179] * [newest-cni-741831] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:06:18.996606  352142 notify.go:220] Checking for updates...
	I1018 15:06:18.996634  352142 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:06:18.997879  352142 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:06:18.999081  352142 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:06:19.000329  352142 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 15:06:19.001580  352142 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:06:19.002773  352142 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:06:19.004542  352142 config.go:182] Loaded profile config "default-k8s-diff-port-489104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:19.004705  352142 config.go:182] Loaded profile config "embed-certs-775590": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:19.004931  352142 config.go:182] Loaded profile config "no-preload-165275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:19.005076  352142 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:06:19.029798  352142 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:06:19.029968  352142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:06:19.087262  352142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 15:06:19.076975606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:06:19.087375  352142 docker.go:318] overlay module found
	I1018 15:06:19.089283  352142 out.go:179] * Using the docker driver based on user configuration
	I1018 15:06:16.235796  347067 addons.go:514] duration metric: took 508.874239ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 15:06:16.564682  347067 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-489104" context rescaled to 1 replicas
	W1018 15:06:18.064585  347067 node_ready.go:57] node "default-k8s-diff-port-489104" has "Ready":"False" status (will retry)
	I1018 15:06:19.090309  352142 start.go:305] selected driver: docker
	I1018 15:06:19.090324  352142 start.go:925] validating driver "docker" against <nil>
	I1018 15:06:19.090335  352142 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:06:19.090980  352142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:06:19.147933  352142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 15:06:19.138241028 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:06:19.148135  352142 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1018 15:06:19.148176  352142 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1018 15:06:19.148433  352142 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 15:06:19.150539  352142 out.go:179] * Using Docker driver with root privileges
	I1018 15:06:19.151779  352142 cni.go:84] Creating CNI manager for ""
	I1018 15:06:19.151848  352142 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:06:19.151872  352142 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 15:06:19.151980  352142 start.go:349] cluster config:
	{Name:newest-cni-741831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-741831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:06:19.153327  352142 out.go:179] * Starting "newest-cni-741831" primary control-plane node in "newest-cni-741831" cluster
	I1018 15:06:19.154334  352142 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:06:19.155556  352142 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:06:19.156744  352142 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:06:19.156787  352142 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 15:06:19.156813  352142 cache.go:58] Caching tarball of preloaded images
	I1018 15:06:19.156868  352142 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 15:06:19.156962  352142 preload.go:233] Found /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 15:06:19.156978  352142 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 15:06:19.157137  352142 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/config.json ...
	I1018 15:06:19.157171  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/config.json: {Name:mkd13aa7acfbed253b9ba5a36cce3dfa1f0aceee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:19.176402  352142 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:06:19.176421  352142 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 15:06:19.176437  352142 cache.go:232] Successfully downloaded all kic artifacts
	I1018 15:06:19.176475  352142 start.go:360] acquireMachinesLock for newest-cni-741831: {Name:mk05ea0bcc583fa4b3d237c8091a165605e0fbe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:06:19.176588  352142 start.go:364] duration metric: took 94.483µs to acquireMachinesLock for "newest-cni-741831"
	I1018 15:06:19.176621  352142 start.go:93] Provisioning new machine with config: &{Name:newest-cni-741831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-741831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:06:19.176710  352142 start.go:125] createHost starting for "" (driver="docker")
	W1018 15:06:18.006003  340627 pod_ready.go:104] pod "coredns-66bc5c9577-cmgb8" is not "Ready", error: <nil>
	W1018 15:06:20.007350  340627 pod_ready.go:104] pod "coredns-66bc5c9577-cmgb8" is not "Ready", error: <nil>
	I1018 15:06:22.007480  340627 pod_ready.go:94] pod "coredns-66bc5c9577-cmgb8" is "Ready"
	I1018 15:06:22.007513  340627 pod_ready.go:86] duration metric: took 38.506906838s for pod "coredns-66bc5c9577-cmgb8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.010793  340627 pod_ready.go:83] waiting for pod "etcd-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.015586  340627 pod_ready.go:94] pod "etcd-no-preload-165275" is "Ready"
	I1018 15:06:22.015617  340627 pod_ready.go:86] duration metric: took 4.797501ms for pod "etcd-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.018019  340627 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.022342  340627 pod_ready.go:94] pod "kube-apiserver-no-preload-165275" is "Ready"
	I1018 15:06:22.022370  340627 pod_ready.go:86] duration metric: took 4.328879ms for pod "kube-apiserver-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.024547  340627 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.205070  340627 pod_ready.go:94] pod "kube-controller-manager-no-preload-165275" is "Ready"
	I1018 15:06:22.205105  340627 pod_ready.go:86] duration metric: took 180.535874ms for pod "kube-controller-manager-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.405341  340627 pod_ready.go:83] waiting for pod "kube-proxy-84fhl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.804708  340627 pod_ready.go:94] pod "kube-proxy-84fhl" is "Ready"
	I1018 15:06:22.804737  340627 pod_ready.go:86] duration metric: took 399.364412ms for pod "kube-proxy-84fhl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:23.009439  340627 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:23.405543  340627 pod_ready.go:94] pod "kube-scheduler-no-preload-165275" is "Ready"
	I1018 15:06:23.405574  340627 pod_ready.go:86] duration metric: took 396.107038ms for pod "kube-scheduler-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:23.405589  340627 pod_ready.go:40] duration metric: took 39.908960633s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:06:23.451163  340627 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:06:23.547653  340627 out.go:179] * Done! kubectl is now configured to use "no-preload-165275" cluster and "default" namespace by default
	I1018 15:06:19.178580  352142 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 15:06:19.178837  352142 start.go:159] libmachine.API.Create for "newest-cni-741831" (driver="docker")
	I1018 15:06:19.178873  352142 client.go:168] LocalClient.Create starting
	I1018 15:06:19.179005  352142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem
	I1018 15:06:19.179061  352142 main.go:141] libmachine: Decoding PEM data...
	I1018 15:06:19.179076  352142 main.go:141] libmachine: Parsing certificate...
	I1018 15:06:19.179132  352142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem
	I1018 15:06:19.179155  352142 main.go:141] libmachine: Decoding PEM data...
	I1018 15:06:19.179164  352142 main.go:141] libmachine: Parsing certificate...
	I1018 15:06:19.179501  352142 cli_runner.go:164] Run: docker network inspect newest-cni-741831 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 15:06:19.196543  352142 cli_runner.go:211] docker network inspect newest-cni-741831 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 15:06:19.196640  352142 network_create.go:284] running [docker network inspect newest-cni-741831] to gather additional debugging logs...
	I1018 15:06:19.196663  352142 cli_runner.go:164] Run: docker network inspect newest-cni-741831
	W1018 15:06:19.213085  352142 cli_runner.go:211] docker network inspect newest-cni-741831 returned with exit code 1
	I1018 15:06:19.213136  352142 network_create.go:287] error running [docker network inspect newest-cni-741831]: docker network inspect newest-cni-741831: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-741831 not found
	I1018 15:06:19.213172  352142 network_create.go:289] output of [docker network inspect newest-cni-741831]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-741831 not found
	
	** /stderr **
	I1018 15:06:19.213347  352142 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:06:19.230587  352142 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-67ded9675d49 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:eb:89:76:0f:a6} reservation:<nil>}
	I1018 15:06:19.231147  352142 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4b365c92bc46 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:db:b6:83:36:75} reservation:<nil>}
	I1018 15:06:19.231748  352142 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ab6063c7cdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:eb:32:cc:ab:b4} reservation:<nil>}
	I1018 15:06:19.232375  352142 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4b571e6f85a5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:35:91:99:08:5b} reservation:<nil>}
	I1018 15:06:19.232993  352142 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-2decf6b0e9a2 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:32:85:60:59:11:56} reservation:<nil>}
	I1018 15:06:19.233747  352142 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f38de0}
	I1018 15:06:19.233775  352142 network_create.go:124] attempt to create docker network newest-cni-741831 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1018 15:06:19.233823  352142 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-741831 newest-cni-741831
	I1018 15:06:19.295382  352142 network_create.go:108] docker network newest-cni-741831 192.168.94.0/24 created
	I1018 15:06:19.295424  352142 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-741831" container
	I1018 15:06:19.295490  352142 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 15:06:19.312794  352142 cli_runner.go:164] Run: docker volume create newest-cni-741831 --label name.minikube.sigs.k8s.io=newest-cni-741831 --label created_by.minikube.sigs.k8s.io=true
	I1018 15:06:19.332326  352142 oci.go:103] Successfully created a docker volume newest-cni-741831
	I1018 15:06:19.332413  352142 cli_runner.go:164] Run: docker run --rm --name newest-cni-741831-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-741831 --entrypoint /usr/bin/test -v newest-cni-741831:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 15:06:19.734788  352142 oci.go:107] Successfully prepared a docker volume newest-cni-741831
	I1018 15:06:19.734843  352142 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:06:19.734868  352142 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 15:06:19.734956  352142 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-741831:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1018 15:06:20.564874  347067 node_ready.go:57] node "default-k8s-diff-port-489104" has "Ready":"False" status (will retry)
	W1018 15:06:22.565092  347067 node_ready.go:57] node "default-k8s-diff-port-489104" has "Ready":"False" status (will retry)
	I1018 15:06:24.339197  352142 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-741831:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.604197145s)
	I1018 15:06:24.339229  352142 kic.go:203] duration metric: took 4.604355206s to extract preloaded images to volume ...
	W1018 15:06:24.339333  352142 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 15:06:24.339364  352142 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 15:06:24.339401  352142 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 15:06:24.406366  352142 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-741831 --name newest-cni-741831 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-741831 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-741831 --network newest-cni-741831 --ip 192.168.94.2 --volume newest-cni-741831:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 15:06:24.727314  352142 cli_runner.go:164] Run: docker container inspect newest-cni-741831 --format={{.State.Running}}
	I1018 15:06:24.750170  352142 cli_runner.go:164] Run: docker container inspect newest-cni-741831 --format={{.State.Status}}
	I1018 15:06:24.774109  352142 cli_runner.go:164] Run: docker exec newest-cni-741831 stat /var/lib/dpkg/alternatives/iptables
	I1018 15:06:24.826218  352142 oci.go:144] the created container "newest-cni-741831" has a running status.
	I1018 15:06:24.826247  352142 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa...
	I1018 15:06:25.591975  352142 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 15:06:25.618152  352142 cli_runner.go:164] Run: docker container inspect newest-cni-741831 --format={{.State.Status}}
	I1018 15:06:25.635630  352142 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 15:06:25.635652  352142 kic_runner.go:114] Args: [docker exec --privileged newest-cni-741831 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 15:06:25.683939  352142 cli_runner.go:164] Run: docker container inspect newest-cni-741831 --format={{.State.Status}}
	I1018 15:06:25.701188  352142 machine.go:93] provisionDockerMachine start ...
	I1018 15:06:25.701290  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:25.719680  352142 main.go:141] libmachine: Using SSH client type: native
	I1018 15:06:25.720029  352142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1018 15:06:25.720060  352142 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:06:25.854071  352142 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-741831
	
	I1018 15:06:25.854106  352142 ubuntu.go:182] provisioning hostname "newest-cni-741831"
	I1018 15:06:25.854160  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:25.872062  352142 main.go:141] libmachine: Using SSH client type: native
	I1018 15:06:25.872341  352142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1018 15:06:25.872365  352142 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-741831 && echo "newest-cni-741831" | sudo tee /etc/hostname
	I1018 15:06:26.015459  352142 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-741831
	
	I1018 15:06:26.015545  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:26.033766  352142 main.go:141] libmachine: Using SSH client type: native
	I1018 15:06:26.034053  352142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1018 15:06:26.034076  352142 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-741831' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-741831/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-741831' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:06:26.171352  352142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 15:06:26.171386  352142 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 15:06:26.171426  352142 ubuntu.go:190] setting up certificates
	I1018 15:06:26.171441  352142 provision.go:84] configureAuth start
	I1018 15:06:26.171503  352142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-741831
	I1018 15:06:26.190241  352142 provision.go:143] copyHostCerts
	I1018 15:06:26.190312  352142 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem, removing ...
	I1018 15:06:26.190325  352142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:06:26.190406  352142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 15:06:26.190521  352142 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem, removing ...
	I1018 15:06:26.190537  352142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:06:26.190580  352142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 15:06:26.190670  352142 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem, removing ...
	I1018 15:06:26.190681  352142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:06:26.190722  352142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 15:06:26.190798  352142 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.newest-cni-741831 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-741831]
	I1018 15:06:26.528284  352142 provision.go:177] copyRemoteCerts
	I1018 15:06:26.528341  352142 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:06:26.528375  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:26.546905  352142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:26.644596  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:06:26.665034  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 15:06:26.683543  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 15:06:26.701141  352142 provision.go:87] duration metric: took 529.670696ms to configureAuth
	I1018 15:06:26.701174  352142 ubuntu.go:206] setting minikube options for container-runtime
	I1018 15:06:26.701364  352142 config.go:182] Loaded profile config "newest-cni-741831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:26.701496  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:26.719555  352142 main.go:141] libmachine: Using SSH client type: native
	I1018 15:06:26.719765  352142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1018 15:06:26.719782  352142 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:06:26.970657  352142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 15:06:26.970682  352142 machine.go:96] duration metric: took 1.269467705s to provisionDockerMachine
	I1018 15:06:26.970692  352142 client.go:171] duration metric: took 7.791810529s to LocalClient.Create
	I1018 15:06:26.970712  352142 start.go:167] duration metric: took 7.791877225s to libmachine.API.Create "newest-cni-741831"
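Everything past this point is provisioned over SSH to the forwarded loopback port (127.0.0.1:33083 in this run), authenticating as user "docker" with the id_rsa key generated above. A minimal sketch of such a "native" SSH client, assuming golang.org/x/crypto/ssh in go.mod:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the forwarded port with key auth and runs one command,
// returning its combined output.
func runOverSSH(addr, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a localhost-forwarded test node
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:33083",
		os.ExpandEnv("$HOME/.minikube/machines/newest-cni-741831/id_rsa"), "hostname")
	fmt.Println(out, err)
}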
	I1018 15:06:26.970719  352142 start.go:293] postStartSetup for "newest-cni-741831" (driver="docker")
	I1018 15:06:26.970729  352142 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:06:26.970806  352142 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:06:26.970861  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:26.988335  352142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:27.087221  352142 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:06:27.090783  352142 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 15:06:27.090809  352142 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 15:06:27.090827  352142 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/addons for local assets ...
	I1018 15:06:27.090877  352142 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/files for local assets ...
	I1018 15:06:27.090972  352142 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem -> 931872.pem in /etc/ssl/certs
	I1018 15:06:27.091056  352142 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:06:27.098707  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:06:27.118867  352142 start.go:296] duration metric: took 148.132063ms for postStartSetup
	I1018 15:06:27.119258  352142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-741831
	I1018 15:06:27.138075  352142 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/config.json ...
	I1018 15:06:27.138321  352142 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 15:06:27.138366  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:27.155272  352142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:27.249460  352142 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 15:06:27.254563  352142 start.go:128] duration metric: took 8.077835013s to createHost
	I1018 15:06:27.254590  352142 start.go:83] releasing machines lock for "newest-cni-741831", held for 8.077985561s
	I1018 15:06:27.254660  352142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-741831
	I1018 15:06:27.273539  352142 ssh_runner.go:195] Run: cat /version.json
	I1018 15:06:27.273588  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:27.273628  352142 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:06:27.273693  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:27.291712  352142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:27.292133  352142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:27.438032  352142 ssh_runner.go:195] Run: systemctl --version
	I1018 15:06:27.444732  352142 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:06:27.480771  352142 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:06:27.485774  352142 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:06:27.485841  352142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:06:27.512064  352142 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
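Since kindnet will be installed as the CNI (chosen below for the docker driver + crio runtime), the preexisting bridge and podman CNI configs are moved aside rather than deleted, by appending a .mk_disabled suffix. A Go sketch of that step (illustrative, not minikube's code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames any bridge/podman CNI config in dir so the
// runtime ignores it, mirroring the find/mv command in the log above.
func disableBridgeCNI(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", src)
		}
	}
	return nil
}

func main() {
	if err := disableBridgeCNI("/etc/cni/net.d"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}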
	I1018 15:06:27.512089  352142 start.go:495] detecting cgroup driver to use...
	I1018 15:06:27.512126  352142 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 15:06:27.512175  352142 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:06:27.528665  352142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:06:27.541203  352142 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:06:27.541255  352142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:06:27.557700  352142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:06:27.577069  352142 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:06:27.661864  352142 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:06:27.751078  352142 docker.go:234] disabling docker service ...
	I1018 15:06:27.751149  352142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:06:27.771123  352142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:06:27.787019  352142 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:06:27.884416  352142 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:06:27.973822  352142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 15:06:27.986604  352142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:06:28.000991  352142 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 15:06:28.001058  352142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.011828  352142 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 15:06:28.011896  352142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.020931  352142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.030085  352142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.039092  352142 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:06:28.047412  352142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.055961  352142 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.069830  352142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.079271  352142 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:06:28.087557  352142 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 15:06:28.095726  352142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:06:28.204871  352142 ssh_runner.go:195] Run: sudo systemctl restart crio
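The sequence above reconfigures CRI-O in place with sed before restarting it: pause image, systemd cgroup manager, conmon cgroup, and the unprivileged-port sysctl. The two central edits can be sketched in Go as plain regexp rewrites of the drop-in file (illustrative only; would need to run as root on the node):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Same effect as the two sed commands in the log: replace whole lines.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}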
	I1018 15:06:28.308340  352142 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:06:28.308400  352142 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:06:28.312652  352142 start.go:563] Will wait 60s for crictl version
	I1018 15:06:28.312706  352142 ssh_runner.go:195] Run: which crictl
	I1018 15:06:28.316479  352142 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 15:06:28.342582  352142 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 15:06:28.342759  352142 ssh_runner.go:195] Run: crio --version
	I1018 15:06:28.371661  352142 ssh_runner.go:195] Run: crio --version
	I1018 15:06:28.404027  352142 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 15:06:28.405208  352142 cli_runner.go:164] Run: docker network inspect newest-cni-741831 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:06:28.422412  352142 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 15:06:28.426696  352142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:06:28.438922  352142 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 15:06:28.440159  352142 kubeadm.go:883] updating cluster {Name:newest-cni-741831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-741831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 15:06:28.440298  352142 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:06:28.440369  352142 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:06:28.471339  352142 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:06:28.471358  352142 crio.go:433] Images already preloaded, skipping extraction
	I1018 15:06:28.471399  352142 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:06:28.498054  352142 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:06:28.498077  352142 cache_images.go:85] Images are preloaded, skipping loading
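The "preloaded" decision is a set comparison: parse sudo crictl images --output json and check that every required tag is already present, which is why no image pulls appear in this log. A sketch of that check (the required list below is a one-element illustration, not the real per-version list):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList matches the shape of `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var l imageList
	if err := json.Unmarshal(out, &l); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range l.Images {
		for _, t := range img.RepoTags {
			have[t] = true
		}
	}
	for _, want := range []string{"registry.k8s.io/pause:3.10.1"} {
		fmt.Println(want, "present:", have[want])
	}
}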
	I1018 15:06:28.498085  352142 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1018 15:06:28.498165  352142 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-741831 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-741831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 15:06:28.498226  352142 ssh_runner.go:195] Run: crio config
	I1018 15:06:28.544284  352142 cni.go:84] Creating CNI manager for ""
	I1018 15:06:28.544310  352142 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:06:28.544334  352142 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 15:06:28.544364  352142 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-741831 NodeName:newest-cni-741831 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 15:06:28.544529  352142 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-741831"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
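The kubeadm config above is rendered from the options dump that precedes it: node IP, CRI socket, pod CIDR and extra args all flow into the InitConfiguration/ClusterConfiguration/KubeletConfiguration documents. A small text/template sketch of the node-registration part (hypothetical template, values taken from this run):

package main

import (
	"os"
	"text/template"
)

// initCfg renders the node-specific InitConfiguration fragment the way the
// generated config above wires criSocket and node-ip.
var initCfg = template.Must(template.New("init").Parse(`apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.Name}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
`))

func main() {
	_ = initCfg.Execute(os.Stdout, struct {
		NodeIP, CRISocket, Name string
		Port                    int
	}{"192.168.94.2", "/var/run/crio/crio.sock", "newest-cni-741831", 8443})
}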
	
	I1018 15:06:28.544591  352142 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 15:06:28.552919  352142 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 15:06:28.552987  352142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 15:06:28.560695  352142 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 15:06:28.573650  352142 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 15:06:28.589169  352142 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1018 15:06:28.602324  352142 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 15:06:28.606123  352142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
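Both /etc/hosts edits in this log (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent pattern: filter out any stale line for the name, write the fresh mapping to a temp file, and copy it back with sudo. A hypothetical helper that assembles that command string:

package main

import "fmt"

// hostsUpdateCmd builds the idempotent /etc/hosts update seen in the log:
// drop any old tab-separated line for name, append ip<TAB>name, copy back.
func hostsUpdateCmd(ip, name string) string {
	return fmt.Sprintf(
		`{ grep -v $'\t%[1]s$' /etc/hosts; echo $'%[2]s\t%[1]s'; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts`,
		name, ip)
}

func main() {
	fmt.Println(hostsUpdateCmd("192.168.94.2", "control-plane.minikube.internal"))
}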
	I1018 15:06:28.616292  352142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:06:28.702657  352142 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:06:28.728867  352142 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831 for IP: 192.168.94.2
	I1018 15:06:28.728898  352142 certs.go:195] generating shared ca certs ...
	I1018 15:06:28.728944  352142 certs.go:227] acquiring lock for ca certs: {Name:mk6d3b4e44856f498157352ffe8bb89d2c7a3998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:28.729163  352142 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key
	I1018 15:06:28.729240  352142 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key
	I1018 15:06:28.729254  352142 certs.go:257] generating profile certs ...
	I1018 15:06:28.729414  352142 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/client.key
	I1018 15:06:28.729451  352142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/client.crt with IP's: []
	I1018 15:06:28.792470  352142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/client.crt ...
	I1018 15:06:28.792500  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/client.crt: {Name:mke8e96a052b8eb8b398b73425f8e5ee1007513d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:28.792716  352142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/client.key ...
	I1018 15:06:28.792733  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/client.key: {Name:mk9c5cc06cccf0052c525e1e52278d7f0300c686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:28.792854  352142 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.key.5b00fbe4
	I1018 15:06:28.792878  352142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.crt.5b00fbe4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
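10.96.0.1 appears in that SAN list because the apiserver must also validate as the in-cluster "kubernetes" Service, which is assigned the first address of ServiceCIDR 10.96.0.0/12. The derivation in Go's net/netip:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("10.96.0.0/12")
	fmt.Println(p.Addr().Next()) // 10.96.0.1, the ClusterIP of the kubernetes Service
}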
	W1018 15:06:24.565716  347067 node_ready.go:57] node "default-k8s-diff-port-489104" has "Ready":"False" status (will retry)
	W1018 15:06:27.064596  347067 node_ready.go:57] node "default-k8s-diff-port-489104" has "Ready":"False" status (will retry)
	I1018 15:06:28.065074  347067 node_ready.go:49] node "default-k8s-diff-port-489104" is "Ready"
	I1018 15:06:28.065102  347067 node_ready.go:38] duration metric: took 12.003457865s for node "default-k8s-diff-port-489104" to be "Ready" ...
	I1018 15:06:28.065119  347067 api_server.go:52] waiting for apiserver process to appear ...
	I1018 15:06:28.065157  347067 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:06:28.076701  347067 api_server.go:72] duration metric: took 12.349786258s to wait for apiserver process to appear ...
	I1018 15:06:28.076733  347067 api_server.go:88] waiting for apiserver healthz status ...
	I1018 15:06:28.076752  347067 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1018 15:06:28.081593  347067 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1018 15:06:28.082688  347067 api_server.go:141] control plane version: v1.34.1
	I1018 15:06:28.082715  347067 api_server.go:131] duration metric: took 5.974362ms to wait for apiserver health ...
	I1018 15:06:28.082726  347067 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:06:28.086013  347067 system_pods.go:59] 8 kube-system pods found
	I1018 15:06:28.086058  347067 system_pods.go:61] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:06:28.086070  347067 system_pods.go:61] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running
	I1018 15:06:28.086083  347067 system_pods.go:61] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:06:28.086088  347067 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running
	I1018 15:06:28.086097  347067 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running
	I1018 15:06:28.086103  347067 system_pods.go:61] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:06:28.086110  347067 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running
	I1018 15:06:28.086118  347067 system_pods.go:61] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:06:28.086130  347067 system_pods.go:74] duration metric: took 3.396495ms to wait for pod list to return data ...
	I1018 15:06:28.086142  347067 default_sa.go:34] waiting for default service account to be created ...
	I1018 15:06:28.088698  347067 default_sa.go:45] found service account: "default"
	I1018 15:06:28.088719  347067 default_sa.go:55] duration metric: took 2.569918ms for default service account to be created ...
	I1018 15:06:28.088729  347067 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 15:06:28.091501  347067 system_pods.go:86] 8 kube-system pods found
	I1018 15:06:28.091528  347067 system_pods.go:89] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:06:28.091534  347067 system_pods.go:89] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running
	I1018 15:06:28.091540  347067 system_pods.go:89] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:06:28.091543  347067 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running
	I1018 15:06:28.091547  347067 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running
	I1018 15:06:28.091550  347067 system_pods.go:89] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:06:28.091554  347067 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running
	I1018 15:06:28.091558  347067 system_pods.go:89] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:06:28.091576  347067 retry.go:31] will retry after 228.914741ms: missing components: kube-dns
	I1018 15:06:28.325222  347067 system_pods.go:86] 8 kube-system pods found
	I1018 15:06:28.325259  347067 system_pods.go:89] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:06:28.325267  347067 system_pods.go:89] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running
	I1018 15:06:28.325275  347067 system_pods.go:89] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:06:28.325281  347067 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running
	I1018 15:06:28.325287  347067 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running
	I1018 15:06:28.325292  347067 system_pods.go:89] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:06:28.325297  347067 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running
	I1018 15:06:28.325304  347067 system_pods.go:89] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:06:28.325327  347067 retry.go:31] will retry after 353.361454ms: missing components: kube-dns
	I1018 15:06:28.682887  347067 system_pods.go:86] 8 kube-system pods found
	I1018 15:06:28.682948  347067 system_pods.go:89] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:06:28.682958  347067 system_pods.go:89] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running
	I1018 15:06:28.682966  347067 system_pods.go:89] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:06:28.682974  347067 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running
	I1018 15:06:28.682981  347067 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running
	I1018 15:06:28.682991  347067 system_pods.go:89] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:06:28.682997  347067 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running
	I1018 15:06:28.683008  347067 system_pods.go:89] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:06:28.683029  347067 retry.go:31] will retry after 298.181886ms: missing components: kube-dns
	I1018 15:06:28.986254  347067 system_pods.go:86] 8 kube-system pods found
	I1018 15:06:28.986282  347067 system_pods.go:89] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Running
	I1018 15:06:28.986288  347067 system_pods.go:89] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running
	I1018 15:06:28.986292  347067 system_pods.go:89] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:06:28.986296  347067 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running
	I1018 15:06:28.986299  347067 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running
	I1018 15:06:28.986302  347067 system_pods.go:89] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:06:28.986305  347067 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running
	I1018 15:06:28.986308  347067 system_pods.go:89] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Running
	I1018 15:06:28.986316  347067 system_pods.go:126] duration metric: took 897.58086ms to wait for k8s-apps to be running ...
	I1018 15:06:28.986323  347067 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 15:06:28.986366  347067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:06:28.999817  347067 system_svc.go:56] duration metric: took 13.480567ms WaitForService to wait for kubelet
	I1018 15:06:28.999843  347067 kubeadm.go:586] duration metric: took 13.272933961s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:06:28.999865  347067 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:06:29.003008  347067 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 15:06:29.003035  347067 node_conditions.go:123] node cpu capacity is 8
	I1018 15:06:29.003050  347067 node_conditions.go:105] duration metric: took 3.181093ms to run NodePressure ...
	I1018 15:06:29.003062  347067 start.go:241] waiting for startup goroutines ...
	I1018 15:06:29.003069  347067 start.go:246] waiting for cluster config update ...
	I1018 15:06:29.003089  347067 start.go:255] writing updated cluster config ...
	I1018 15:06:29.003370  347067 ssh_runner.go:195] Run: rm -f paused
	I1018 15:06:29.007398  347067 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:06:29.011225  347067 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dtjgd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.015433  347067 pod_ready.go:94] pod "coredns-66bc5c9577-dtjgd" is "Ready"
	I1018 15:06:29.015452  347067 pod_ready.go:86] duration metric: took 4.205314ms for pod "coredns-66bc5c9577-dtjgd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.017638  347067 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.021381  347067 pod_ready.go:94] pod "etcd-default-k8s-diff-port-489104" is "Ready"
	I1018 15:06:29.021399  347067 pod_ready.go:86] duration metric: took 3.738445ms for pod "etcd-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.023308  347067 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.026567  347067 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-489104" is "Ready"
	I1018 15:06:29.026587  347067 pod_ready.go:86] duration metric: took 3.257885ms for pod "kube-apiserver-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.028296  347067 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.063010  352142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.crt.5b00fbe4 ...
	I1018 15:06:29.063038  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.crt.5b00fbe4: {Name:mk3d0668ddae7d28b699df3536f8e4c4c7dbf460 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:29.063212  352142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.key.5b00fbe4 ...
	I1018 15:06:29.063226  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.key.5b00fbe4: {Name:mkd60891ad06419625ec1cb1227353159cfb6546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:29.063304  352142 certs.go:382] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.crt.5b00fbe4 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.crt
	I1018 15:06:29.063375  352142 certs.go:386] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.key.5b00fbe4 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.key
	I1018 15:06:29.063429  352142 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.key
	I1018 15:06:29.063450  352142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.crt with IP's: []
	I1018 15:06:29.291547  352142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.crt ...
	I1018 15:06:29.291575  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.crt: {Name:mk9053fa1d59e516145d535ccf928a7a4620007b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:29.291747  352142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.key ...
	I1018 15:06:29.291760  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.key: {Name:mk718982611c021d2ca690df47a58e465ee8a410 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:29.291962  352142 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem (1338 bytes)
	W1018 15:06:29.292002  352142 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187_empty.pem, impossibly tiny 0 bytes
	I1018 15:06:29.292011  352142 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 15:06:29.292032  352142 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem (1082 bytes)
	I1018 15:06:29.292057  352142 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem (1123 bytes)
	I1018 15:06:29.292078  352142 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem (1675 bytes)
	I1018 15:06:29.292125  352142 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:06:29.292759  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 15:06:29.311389  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 15:06:29.329272  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 15:06:29.346654  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 15:06:29.364002  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 15:06:29.382237  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 15:06:29.400409  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 15:06:29.420478  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 15:06:29.440286  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 15:06:29.460529  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem --> /usr/share/ca-certificates/93187.pem (1338 bytes)
	I1018 15:06:29.478328  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /usr/share/ca-certificates/931872.pem (1708 bytes)
	I1018 15:06:29.495696  352142 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 15:06:29.508533  352142 ssh_runner.go:195] Run: openssl version
	I1018 15:06:29.514752  352142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 15:06:29.523282  352142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:06:29.527456  352142 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:06:29.527507  352142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:06:29.562619  352142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 15:06:29.573830  352142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93187.pem && ln -fs /usr/share/ca-certificates/93187.pem /etc/ssl/certs/93187.pem"
	I1018 15:06:29.582909  352142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93187.pem
	I1018 15:06:29.587243  352142 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 14:26 /usr/share/ca-certificates/93187.pem
	I1018 15:06:29.587318  352142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93187.pem
	I1018 15:06:29.624088  352142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93187.pem /etc/ssl/certs/51391683.0"
	I1018 15:06:29.633601  352142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/931872.pem && ln -fs /usr/share/ca-certificates/931872.pem /etc/ssl/certs/931872.pem"
	I1018 15:06:29.642526  352142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/931872.pem
	I1018 15:06:29.646524  352142 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 14:26 /usr/share/ca-certificates/931872.pem
	I1018 15:06:29.646586  352142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/931872.pem
	I1018 15:06:29.681559  352142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/931872.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 15:06:29.690980  352142 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 15:06:29.694862  352142 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 15:06:29.694929  352142 kubeadm.go:400] StartCluster: {Name:newest-cni-741831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-741831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:06:29.695023  352142 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 15:06:29.695110  352142 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 15:06:29.723568  352142 cri.go:89] found id: ""
	I1018 15:06:29.723638  352142 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 15:06:29.731906  352142 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 15:06:29.740249  352142 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 15:06:29.740294  352142 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 15:06:29.748230  352142 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 15:06:29.748252  352142 kubeadm.go:157] found existing configuration files:
	
	I1018 15:06:29.748291  352142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 15:06:29.756351  352142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 15:06:29.756404  352142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 15:06:29.764376  352142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 15:06:29.772207  352142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 15:06:29.772260  352142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 15:06:29.779898  352142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 15:06:29.787823  352142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 15:06:29.787891  352142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 15:06:29.795735  352142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 15:06:29.803573  352142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 15:06:29.803624  352142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 15:06:29.811038  352142 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 15:06:29.850187  352142 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 15:06:29.850272  352142 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 15:06:29.871064  352142 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 15:06:29.871172  352142 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 15:06:29.871247  352142 kubeadm.go:318] OS: Linux
	I1018 15:06:29.871372  352142 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 15:06:29.871447  352142 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 15:06:29.871518  352142 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 15:06:29.871595  352142 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 15:06:29.871671  352142 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 15:06:29.871761  352142 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 15:06:29.871839  352142 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 15:06:29.871898  352142 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 15:06:29.935613  352142 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 15:06:29.935785  352142 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 15:06:29.935942  352142 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 15:06:29.943662  352142 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 15:06:29.412888  347067 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-489104" is "Ready"
	I1018 15:06:29.412928  347067 pod_ready.go:86] duration metric: took 384.611351ms for pod "kube-controller-manager-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.612847  347067 pod_ready.go:83] waiting for pod "kube-proxy-7wbfs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:30.011983  347067 pod_ready.go:94] pod "kube-proxy-7wbfs" is "Ready"
	I1018 15:06:30.012012  347067 pod_ready.go:86] duration metric: took 399.134641ms for pod "kube-proxy-7wbfs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:30.211949  347067 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:30.612493  347067 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-489104" is "Ready"
	I1018 15:06:30.612525  347067 pod_ready.go:86] duration metric: took 400.540545ms for pod "kube-scheduler-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:30.612538  347067 pod_ready.go:40] duration metric: took 1.60510698s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:06:30.661514  347067 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:06:30.663459  347067 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-489104" cluster and "default" namespace by default
	I1018 15:06:29.948047  352142 out.go:252]   - Generating certificates and keys ...
	I1018 15:06:29.948147  352142 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 15:06:29.948229  352142 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 15:06:30.250963  352142 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 15:06:30.366731  352142 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 15:06:30.535222  352142 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 15:06:30.853257  352142 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 15:06:31.046320  352142 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 15:06:31.046555  352142 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-741831] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 15:06:31.171804  352142 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 15:06:31.172019  352142 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-741831] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 15:06:32.275618  352142 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 15:06:33.097457  352142 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 15:06:33.197652  352142 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 15:06:33.197773  352142 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 15:06:33.308356  352142 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 15:06:33.547102  352142 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 15:06:33.677173  352142 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 15:06:34.208214  352142 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 15:06:34.302781  352142 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 15:06:34.303476  352142 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 15:06:34.308455  352142 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Oct 18 15:06:03 no-preload-165275 crio[556]: time="2025-10-18T15:06:03.617503198Z" level=info msg="Started container" PID=1724 containerID=1ea0d2250d60235dd8ab7570ad302ef35617e4a9b3276fb327f6980e483073ab description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468/dashboard-metrics-scraper id=66e7306e-b51b-4d98-86f1-6c7c8c1ca055 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9ace59796e3ac4dc8476d8b04a9ba2ecd161b3698c889f248a8e3aa87a7c9650
	Oct 18 15:06:03 no-preload-165275 crio[556]: time="2025-10-18T15:06:03.686817508Z" level=info msg="Removing container: 609c9df9c9c317c944ad79690ed3684ddc5526e73335deadb8ac7e06f6710b54" id=ab3db32b-745c-4add-aafa-bcd13b2fd219 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:06:03 no-preload-165275 crio[556]: time="2025-10-18T15:06:03.696817358Z" level=info msg="Removed container 609c9df9c9c317c944ad79690ed3684ddc5526e73335deadb8ac7e06f6710b54: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468/dashboard-metrics-scraper" id=ab3db32b-745c-4add-aafa-bcd13b2fd219 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.715177431Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b74bda98-017c-45dd-be22-c8de3aab835f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.716139221Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1c156b98-a5ea-4e41-8df1-e5163d874fc8 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.717258816Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=42ac826d-d9e9-4df6-b21c-7c692742b77d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.717532767Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.724438938Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.724641941Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ce8ae44cd8968e8cd02b513f88f15ce63817d634151755feec57ae2aa624ba99/merged/etc/passwd: no such file or directory"
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.724683993Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ce8ae44cd8968e8cd02b513f88f15ce63817d634151755feec57ae2aa624ba99/merged/etc/group: no such file or directory"
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.725049904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.756537622Z" level=info msg="Created container bd99f521b104d52b1eeda1acab34e9e643be51bbb7965e0beed26ac85243b990: kube-system/storage-provisioner/storage-provisioner" id=42ac826d-d9e9-4df6-b21c-7c692742b77d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.757252809Z" level=info msg="Starting container: bd99f521b104d52b1eeda1acab34e9e643be51bbb7965e0beed26ac85243b990" id=c06e7d88-d508-4550-a419-c8aa4e84921a name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.759359373Z" level=info msg="Started container" PID=1738 containerID=bd99f521b104d52b1eeda1acab34e9e643be51bbb7965e0beed26ac85243b990 description=kube-system/storage-provisioner/storage-provisioner id=c06e7d88-d508-4550-a419-c8aa4e84921a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b6604b1d1a7ac7b7bcc760ac60f0458edc3f2d0e5d0f4769caa10b24ce55e04c
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.572908097Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fbcd610f-78c2-4e56-a194-f8c80fbbefb1 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.573877584Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bb2d95b2-a79d-43c1-b8e9-5a64e26b1306 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.575021268Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468/dashboard-metrics-scraper" id=738120ce-d509-4f0a-828f-bbe5a6f0c97a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.575293944Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.580579731Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.581089922Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.611615842Z" level=info msg="Created container 75a93b11eb42f4e75eb477490277c486c91ad8b2a0193107139e8ee97e00ca8c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468/dashboard-metrics-scraper" id=738120ce-d509-4f0a-828f-bbe5a6f0c97a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.612350635Z" level=info msg="Starting container: 75a93b11eb42f4e75eb477490277c486c91ad8b2a0193107139e8ee97e00ca8c" id=5f9f0836-5e3b-49be-b183-874868c9d0f4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.614803871Z" level=info msg="Started container" PID=1774 containerID=75a93b11eb42f4e75eb477490277c486c91ad8b2a0193107139e8ee97e00ca8c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468/dashboard-metrics-scraper id=5f9f0836-5e3b-49be-b183-874868c9d0f4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9ace59796e3ac4dc8476d8b04a9ba2ecd161b3698c889f248a8e3aa87a7c9650
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.750638037Z" level=info msg="Removing container: 1ea0d2250d60235dd8ab7570ad302ef35617e4a9b3276fb327f6980e483073ab" id=14bb2112-49eb-46ab-9dfb-286774147c8c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.764121612Z" level=info msg="Removed container 1ea0d2250d60235dd8ab7570ad302ef35617e4a9b3276fb327f6980e483073ab: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468/dashboard-metrics-scraper" id=14bb2112-49eb-46ab-9dfb-286774147c8c name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	75a93b11eb42f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   3                   9ace59796e3ac       dashboard-metrics-scraper-6ffb444bf9-vd468   kubernetes-dashboard
	bd99f521b104d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   b6604b1d1a7ac       storage-provisioner                          kube-system
	40d37635759ff       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   e1f42a7afcec3       kubernetes-dashboard-855c9754f9-4l599        kubernetes-dashboard
	b7590848e3e2c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   05c31e78396dd       busybox                                      default
	d8cad0e51da9b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   c24acd414200b       coredns-66bc5c9577-cmgb8                     kube-system
	a61bf08741c20       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   fa8b2b1d4d966       kindnet-8c5w4                                kube-system
	f24643699519e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   b6604b1d1a7ac       storage-provisioner                          kube-system
	d8643b50024f1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   c37dbb78327f2       kube-proxy-84fhl                             kube-system
	d37cf270acf4c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   7141a9e314ca6       kube-controller-manager-no-preload-165275    kube-system
	ce5891388244a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   c835c15571cb9       kube-apiserver-no-preload-165275             kube-system
	c1d28d4d24c3e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   9907de81fe5e2       kube-scheduler-no-preload-165275             kube-system
	3e2c583673b99       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   9e5d9a5eb49d5       etcd-no-preload-165275                       kube-system
	
	
	==> coredns [d8cad0e51da9ba1a5945123306231034e96864d53528c3d0398f4332e290fd40] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44093 - 63494 "HINFO IN 7903083053169130360.1446552966389043242. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.300822671s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-165275
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-165275
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=no-preload-165275
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_04_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:04:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-165275
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:06:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:06:33 +0000   Sat, 18 Oct 2025 15:04:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:06:33 +0000   Sat, 18 Oct 2025 15:04:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:06:33 +0000   Sat, 18 Oct 2025 15:04:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 15:06:33 +0000   Sat, 18 Oct 2025 15:05:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-165275
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                6d727dff-cef3-4b2d-bb6c-d6d48f30b9ab
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-cmgb8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-no-preload-165275                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-8c5w4                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-no-preload-165275              250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-165275     200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-84fhl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-no-preload-165275              100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vd468    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4l599         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node no-preload-165275 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node no-preload-165275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node no-preload-165275 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     115s               kubelet          Node no-preload-165275 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node no-preload-165275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node no-preload-165275 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           111s               node-controller  Node no-preload-165275 event: Registered Node no-preload-165275 in Controller
	  Normal  NodeReady                96s                kubelet          Node no-preload-165275 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node no-preload-165275 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node no-preload-165275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node no-preload-165275 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node no-preload-165275 event: Registered Node no-preload-165275 in Controller
	
	
	==> dmesg <==
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	
	
	==> etcd [3e2c583673b99348bde570e54f1913de407877ce7969439954326ffcf6f4fc31] <==
	{"level":"warn","ts":"2025-10-18T15:05:41.393485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.401940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.410780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.430842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.436810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.444863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.469163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.486527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.497157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.503971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.515170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.523659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.532659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.540601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.549522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.557779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.566539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.574778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.583205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.597574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.605580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.614017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.680175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39960","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T15:05:55.502064Z","caller":"traceutil/trace.go:172","msg":"trace[1869035317] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"127.523053ms","start":"2025-10-18T15:05:55.374514Z","end":"2025-10-18T15:05:55.502037Z","steps":["trace[1869035317] 'process raft request'  (duration: 114.388633ms)","trace[1869035317] 'compare'  (duration: 12.953597ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T15:05:55.725104Z","caller":"traceutil/trace.go:172","msg":"trace[253110695] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"143.721998ms","start":"2025-10-18T15:05:55.581348Z","end":"2025-10-18T15:05:55.725070Z","steps":["trace[253110695] 'process raft request'  (duration: 125.670218ms)","trace[253110695] 'compare'  (duration: 17.946928ms)"],"step_count":2}
	
	
	==> kernel <==
	 15:06:38 up  2:49,  0 user,  load average: 2.88, 2.80, 1.95
	Linux no-preload-165275 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a61bf08741c2071de9d41f7d9a959c9d0202f13a22c5d7343ac7bb3c3b93e5e2] <==
	I1018 15:05:43.216289       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 15:05:43.216585       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 15:05:43.216748       1 main.go:148] setting mtu 1500 for CNI 
	I1018 15:05:43.216771       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 15:05:43.216795       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T15:05:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 15:05:43.511886       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 15:05:43.614322       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 15:05:43.614346       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 15:05:43.614558       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 15:05:44.015225       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 15:05:44.015256       1 metrics.go:72] Registering metrics
	I1018 15:05:44.015314       1 controller.go:711] "Syncing nftables rules"
	I1018 15:05:53.427880       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 15:05:53.427983       1 main.go:301] handling current node
	I1018 15:06:03.430402       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 15:06:03.430456       1 main.go:301] handling current node
	I1018 15:06:13.426076       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 15:06:13.426107       1 main.go:301] handling current node
	I1018 15:06:23.426109       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 15:06:23.426141       1 main.go:301] handling current node
	I1018 15:06:33.426955       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 15:06:33.427008       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ce5891388244aaa439d0521f1c59f74520a5be8cfe55bae6fec434a5125ea972] <==
	I1018 15:05:42.296235       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 15:05:42.296330       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 15:05:42.300156       1 aggregator.go:171] initial CRD sync complete...
	I1018 15:05:42.300173       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 15:05:42.300181       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 15:05:42.300187       1 cache.go:39] Caches are synced for autoregister controller
	I1018 15:05:42.296651       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 15:05:42.296667       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 15:05:42.300377       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 15:05:42.296293       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 15:05:42.311961       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 15:05:42.350204       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 15:05:42.350777       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:05:42.381850       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 15:05:42.553258       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 15:05:42.691316       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 15:05:42.730209       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 15:05:42.753454       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:05:42.762022       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:05:42.812181       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.171.236"}
	I1018 15:05:42.827996       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.214.94"}
	I1018 15:05:43.197383       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:05:45.609210       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 15:05:45.705249       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 15:05:45.756054       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d37cf270acf4cdb482c3d7fdb5fa2e8ecdf544a1b1172db005a424e0b482c119] <==
	I1018 15:05:45.192307       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 15:05:45.194587       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 15:05:45.196838       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 15:05:45.196955       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 15:05:45.197065       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-165275"
	I1018 15:05:45.197213       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 15:05:45.198155       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 15:05:45.200489       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 15:05:45.202780       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 15:05:45.202812       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 15:05:45.202893       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 15:05:45.203025       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 15:05:45.202941       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 15:05:45.203900       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 15:05:45.204011       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 15:05:45.204320       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 15:05:45.206424       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 15:05:45.207652       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 15:05:45.208734       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 15:05:45.208806       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 15:05:45.208854       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 15:05:45.208861       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 15:05:45.208867       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 15:05:45.208880       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 15:05:45.232108       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [d8643b50024f13026b83ef70e0a7a12d1d5fc9a309e6bcd49fa11236a78579ff] <==
	I1018 15:05:43.015232       1 server_linux.go:53] "Using iptables proxy"
	I1018 15:05:43.078262       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 15:05:43.178830       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 15:05:43.178871       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 15:05:43.179008       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 15:05:43.199405       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 15:05:43.199472       1 server_linux.go:132] "Using iptables Proxier"
	I1018 15:05:43.206159       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 15:05:43.206663       1 server.go:527] "Version info" version="v1.34.1"
	I1018 15:05:43.206701       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:05:43.211863       1 config.go:200] "Starting service config controller"
	I1018 15:05:43.212082       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 15:05:43.212255       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 15:05:43.212337       1 config.go:106] "Starting endpoint slice config controller"
	I1018 15:05:43.212407       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 15:05:43.212012       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 15:05:43.212306       1 config.go:309] "Starting node config controller"
	I1018 15:05:43.212966       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 15:05:43.213026       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 15:05:43.312479       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 15:05:43.312768       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 15:05:43.312768       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c1d28d4d24c3ece98b690ada9bd56a5d7ebdd925b9e2320e8f7d9f1b62f77b34] <==
	I1018 15:05:40.716722       1 serving.go:386] Generated self-signed cert in-memory
	W1018 15:05:42.228926       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 15:05:42.231089       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 15:05:42.231192       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 15:05:42.231227       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 15:05:42.290065       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 15:05:42.290161       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:05:42.293215       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 15:05:42.293341       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:05:42.296743       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:05:42.293371       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 15:05:42.396927       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 15:05:49 no-preload-165275 kubelet[702]: I1018 15:05:49.637616     702 scope.go:117] "RemoveContainer" containerID="609c9df9c9c317c944ad79690ed3684ddc5526e73335deadb8ac7e06f6710b54"
	Oct 18 15:05:49 no-preload-165275 kubelet[702]: E1018 15:05:49.637778     702 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vd468_kubernetes-dashboard(d57570e6-aad1-4c12-a379-6f8f0e8c277c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468" podUID="d57570e6-aad1-4c12-a379-6f8f0e8c277c"
	Oct 18 15:05:50 no-preload-165275 kubelet[702]: I1018 15:05:50.646759     702 scope.go:117] "RemoveContainer" containerID="609c9df9c9c317c944ad79690ed3684ddc5526e73335deadb8ac7e06f6710b54"
	Oct 18 15:05:50 no-preload-165275 kubelet[702]: E1018 15:05:50.649948     702 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vd468_kubernetes-dashboard(d57570e6-aad1-4c12-a379-6f8f0e8c277c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468" podUID="d57570e6-aad1-4c12-a379-6f8f0e8c277c"
	Oct 18 15:05:51 no-preload-165275 kubelet[702]: I1018 15:05:51.649410     702 scope.go:117] "RemoveContainer" containerID="609c9df9c9c317c944ad79690ed3684ddc5526e73335deadb8ac7e06f6710b54"
	Oct 18 15:05:51 no-preload-165275 kubelet[702]: E1018 15:05:51.649644     702 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vd468_kubernetes-dashboard(d57570e6-aad1-4c12-a379-6f8f0e8c277c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468" podUID="d57570e6-aad1-4c12-a379-6f8f0e8c277c"
	Oct 18 15:05:51 no-preload-165275 kubelet[702]: I1018 15:05:51.955934     702 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 15:05:55 no-preload-165275 kubelet[702]: I1018 15:05:55.365367     702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4l599" podStartSLOduration=3.695280231 podStartE2EDuration="10.365342384s" podCreationTimestamp="2025-10-18 15:05:45 +0000 UTC" firstStartedPulling="2025-10-18 15:05:46.178371919 +0000 UTC m=+6.712601151" lastFinishedPulling="2025-10-18 15:05:52.848434076 +0000 UTC m=+13.382663304" observedRunningTime="2025-10-18 15:05:53.718286731 +0000 UTC m=+14.252515973" watchObservedRunningTime="2025-10-18 15:05:55.365342384 +0000 UTC m=+15.899571626"
	Oct 18 15:06:03 no-preload-165275 kubelet[702]: I1018 15:06:03.572127     702 scope.go:117] "RemoveContainer" containerID="609c9df9c9c317c944ad79690ed3684ddc5526e73335deadb8ac7e06f6710b54"
	Oct 18 15:06:03 no-preload-165275 kubelet[702]: I1018 15:06:03.685510     702 scope.go:117] "RemoveContainer" containerID="609c9df9c9c317c944ad79690ed3684ddc5526e73335deadb8ac7e06f6710b54"
	Oct 18 15:06:03 no-preload-165275 kubelet[702]: I1018 15:06:03.685740     702 scope.go:117] "RemoveContainer" containerID="1ea0d2250d60235dd8ab7570ad302ef35617e4a9b3276fb327f6980e483073ab"
	Oct 18 15:06:03 no-preload-165275 kubelet[702]: E1018 15:06:03.685976     702 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vd468_kubernetes-dashboard(d57570e6-aad1-4c12-a379-6f8f0e8c277c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468" podUID="d57570e6-aad1-4c12-a379-6f8f0e8c277c"
	Oct 18 15:06:10 no-preload-165275 kubelet[702]: I1018 15:06:10.163582     702 scope.go:117] "RemoveContainer" containerID="1ea0d2250d60235dd8ab7570ad302ef35617e4a9b3276fb327f6980e483073ab"
	Oct 18 15:06:10 no-preload-165275 kubelet[702]: E1018 15:06:10.163765     702 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vd468_kubernetes-dashboard(d57570e6-aad1-4c12-a379-6f8f0e8c277c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468" podUID="d57570e6-aad1-4c12-a379-6f8f0e8c277c"
	Oct 18 15:06:13 no-preload-165275 kubelet[702]: I1018 15:06:13.714658     702 scope.go:117] "RemoveContainer" containerID="f24643699519eede5987d2db64babc61ecbb2bc1fbe89e7d24e540599e9fda2c"
	Oct 18 15:06:24 no-preload-165275 kubelet[702]: I1018 15:06:24.572333     702 scope.go:117] "RemoveContainer" containerID="1ea0d2250d60235dd8ab7570ad302ef35617e4a9b3276fb327f6980e483073ab"
	Oct 18 15:06:24 no-preload-165275 kubelet[702]: I1018 15:06:24.748372     702 scope.go:117] "RemoveContainer" containerID="1ea0d2250d60235dd8ab7570ad302ef35617e4a9b3276fb327f6980e483073ab"
	Oct 18 15:06:24 no-preload-165275 kubelet[702]: I1018 15:06:24.749043     702 scope.go:117] "RemoveContainer" containerID="75a93b11eb42f4e75eb477490277c486c91ad8b2a0193107139e8ee97e00ca8c"
	Oct 18 15:06:24 no-preload-165275 kubelet[702]: E1018 15:06:24.749285     702 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vd468_kubernetes-dashboard(d57570e6-aad1-4c12-a379-6f8f0e8c277c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468" podUID="d57570e6-aad1-4c12-a379-6f8f0e8c277c"
	Oct 18 15:06:30 no-preload-165275 kubelet[702]: I1018 15:06:30.163368     702 scope.go:117] "RemoveContainer" containerID="75a93b11eb42f4e75eb477490277c486c91ad8b2a0193107139e8ee97e00ca8c"
	Oct 18 15:06:30 no-preload-165275 kubelet[702]: E1018 15:06:30.163589     702 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vd468_kubernetes-dashboard(d57570e6-aad1-4c12-a379-6f8f0e8c277c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468" podUID="d57570e6-aad1-4c12-a379-6f8f0e8c277c"
	Oct 18 15:06:35 no-preload-165275 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 15:06:35 no-preload-165275 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 15:06:35 no-preload-165275 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 15:06:35 no-preload-165275 systemd[1]: kubelet.service: Consumed 1.776s CPU time.
	
	
	==> kubernetes-dashboard [40d37635759ffb9d9f2cb9a03f0e608336ed376a7646906d2e3102badf4b2204] <==
	2025/10/18 15:05:52 Using namespace: kubernetes-dashboard
	2025/10/18 15:05:52 Using in-cluster config to connect to apiserver
	2025/10/18 15:05:52 Using secret token for csrf signing
	2025/10/18 15:05:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 15:05:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 15:05:52 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 15:05:52 Generating JWE encryption key
	2025/10/18 15:05:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 15:05:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 15:05:53 Initializing JWE encryption key from synchronized object
	2025/10/18 15:05:53 Creating in-cluster Sidecar client
	2025/10/18 15:05:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 15:05:53 Serving insecurely on HTTP port: 9090
	2025/10/18 15:06:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 15:05:52 Starting overwatch
	
	
	==> storage-provisioner [bd99f521b104d52b1eeda1acab34e9e643be51bbb7965e0beed26ac85243b990] <==
	I1018 15:06:13.772968       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 15:06:13.781902       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 15:06:13.781967       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 15:06:13.784444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:17.239820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:21.500414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:25.099281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:28.153434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:31.176691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:31.182023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 15:06:31.182194       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 15:06:31.182319       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"297bdaca-635d-490e-89a8-cdf06fe2f03a", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-165275_fb90670f-c83b-45c4-ac94-af6eddde5e55 became leader
	I1018 15:06:31.182368       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-165275_fb90670f-c83b-45c4-ac94-af6eddde5e55!
	W1018 15:06:31.184852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:31.187901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 15:06:31.283277       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-165275_fb90670f-c83b-45c4-ac94-af6eddde5e55!
	W1018 15:06:33.191621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:33.198091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:35.202334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:35.208172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:37.211418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:37.215475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
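
The repeating warnings come from the provisioner's legacy leader-election lock, which is stored in a v1 Endpoints object and re-read on every retry; the election itself (acquire the lease, then start the controller) is the standard client-go pattern. A hedged sketch using the modern Lease-based lock, which avoids the deprecation warning entirely (identity and timings are illustrative, not the provisioner's real values):

	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Lease lock instead of the deprecated Endpoints lock.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "my-provisioner-id"},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader; starting controller") },
				OnStoppedLeading: func() { log.Println("lost leadership") },
			},
		})
	}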
	
	
	==> storage-provisioner [f24643699519eede5987d2db64babc61ecbb2bc1fbe89e7d24e540599e9fda2c] <==
	I1018 15:05:42.994160       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 15:06:12.997445       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
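
10.96.0.1:443 is the in-cluster `kubernetes` Service VIP, so this earlier provisioner instance died because the apiserver was unreachable through the service network for the full 32s client timeout (the node had just restarted and kube-proxy/CNI were not forwarding yet). The failed probe is roughly equivalent to this sketch (assumes it runs inside a pod; everything here is illustrative, not the provisioner's code):

	package main

	import (
		"fmt"
		"log"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // resolves to the 10.96.0.1:443 VIP inside a pod
		if err != nil {
			log.Fatal(err)
		}
		cfg.Timeout = 32 * time.Second // matches the ?timeout=32s in the error URL above
		client := kubernetes.NewForConfigOrDie(cfg)
		v, err := client.Discovery().ServerVersion() // GET /version
		if err != nil {
			log.Fatalf("error getting server version: %v", err) // the F1018 line above
		}
		fmt.Println("apiserver:", v.GitVersion)
	}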
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-165275 -n no-preload-165275
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-165275 -n no-preload-165275: exit status 2 (373.741407ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-165275 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-165275
helpers_test.go:243: (dbg) docker inspect no-preload-165275:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06",
	        "Created": "2025-10-18T15:04:14.174636016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 341047,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T15:05:32.595965677Z",
	            "FinishedAt": "2025-10-18T15:05:31.463575696Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06/hostname",
	        "HostsPath": "/var/lib/docker/containers/aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06/hosts",
	        "LogPath": "/var/lib/docker/containers/aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06/aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06-json.log",
	        "Name": "/no-preload-165275",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-165275:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-165275",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "aa996275db3e8c70cfc4a61ba2afa243768501cc652f46708f0be8c6b02e0c06",
	                "LowerDir": "/var/lib/docker/overlay2/416402f5dd03bf1ed780ad68ce499c08cfc04a7c16ada619f14034f104f2e745-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/416402f5dd03bf1ed780ad68ce499c08cfc04a7c16ada619f14034f104f2e745/merged",
	                "UpperDir": "/var/lib/docker/overlay2/416402f5dd03bf1ed780ad68ce499c08cfc04a7c16ada619f14034f104f2e745/diff",
	                "WorkDir": "/var/lib/docker/overlay2/416402f5dd03bf1ed780ad68ce499c08cfc04a7c16ada619f14034f104f2e745/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-165275",
	                "Source": "/var/lib/docker/volumes/no-preload-165275/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-165275",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-165275",
	                "name.minikube.sigs.k8s.io": "no-preload-165275",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f17c16ab34f345abf9f30d9f39da3075239d747de88dc57cd0c8f8a84e03442",
	            "SandboxKey": "/var/run/docker/netns/8f17c16ab34f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-165275": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:8d:9d:5d:8a:fa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2decf6b0e9a2edffe7ff29802fe30453af810cd2279b900d48c499fda7236039",
	                    "EndpointID": "decb1b7a47fe613d6c395754ce37b39c788201facac8b0fac4c65463d8400028",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-165275",
	                        "aa996275db3e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
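
Note `"Running": true` and `"Paused": false` in the inspect output: the `pause -p no-preload-165275` recorded in the audit table below never took effect on the kic container, which is why this test failed. The same two fields can be pulled programmatically with a `docker inspect` format template; a small sketch assuming only the docker CLI (the helper name is hypothetical):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// containerState shells out to `docker inspect` with a Go template and
	// returns the container's status ("running", "paused", ...) and Paused flag.
	func containerState(name string) (status string, paused bool, err error) {
		out, err := exec.Command("docker", "inspect",
			"-f", "{{.State.Status}} {{.State.Paused}}", name).Output()
		if err != nil {
			return "", false, err
		}
		f := strings.Fields(string(out))
		if len(f) != 2 {
			return "", false, fmt.Errorf("unexpected inspect output: %q", out)
		}
		return f[0], f[1] == "true", nil
	}

	func main() {
		status, paused, err := containerState("no-preload-165275")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("status=%s paused=%v\n", status, paused) // a successful pause shows "paused true"
	}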
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-165275 -n no-preload-165275
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-165275 -n no-preload-165275: exit status 2 (343.129548ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-165275 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-165275 logs -n 25: (1.281990219s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-948537 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-948537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ start   │ -p old-k8s-version-948537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-165275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ stop    │ -p no-preload-165275 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-833162    │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-833162    │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ delete  │ -p kubernetes-upgrade-833162                                                                                                                                                                                                                  │ kubernetes-upgrade-833162    │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ addons  │ enable dashboard -p no-preload-165275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p embed-certs-775590 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p no-preload-165275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:06 UTC │
	│ image   │ old-k8s-version-948537 image list --format=json                                                                                                                                                                                               │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ pause   │ -p old-k8s-version-948537 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ delete  │ -p old-k8s-version-948537                                                                                                                                                                                                                     │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ delete  │ -p old-k8s-version-948537                                                                                                                                                                                                                     │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ delete  │ -p disable-driver-mounts-677415                                                                                                                                                                                                               │ disable-driver-mounts-677415 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p default-k8s-diff-port-489104 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p cert-expiration-327346 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-327346       │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ delete  │ -p cert-expiration-327346                                                                                                                                                                                                                     │ cert-expiration-327346       │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p newest-cni-741831 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-775590 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ stop    │ -p embed-certs-775590 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ image   │ no-preload-165275 image list --format=json                                                                                                                                                                                                    │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ pause   │ -p no-preload-165275 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-489104 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:06:18
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:06:18.990992  352142 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:06:18.991107  352142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:06:18.991114  352142 out.go:374] Setting ErrFile to fd 2...
	I1018 15:06:18.991124  352142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:06:18.991316  352142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:06:18.991810  352142 out.go:368] Setting JSON to false
	I1018 15:06:18.993170  352142 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10130,"bootTime":1760789849,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:06:18.993269  352142 start.go:141] virtualization: kvm guest
	I1018 15:06:18.995348  352142 out.go:179] * [newest-cni-741831] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:06:18.996606  352142 notify.go:220] Checking for updates...
	I1018 15:06:18.996634  352142 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:06:18.997879  352142 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:06:18.999081  352142 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:06:19.000329  352142 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 15:06:19.001580  352142 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:06:19.002773  352142 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:06:19.004542  352142 config.go:182] Loaded profile config "default-k8s-diff-port-489104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:19.004705  352142 config.go:182] Loaded profile config "embed-certs-775590": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:19.004931  352142 config.go:182] Loaded profile config "no-preload-165275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:19.005076  352142 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:06:19.029798  352142 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:06:19.029968  352142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:06:19.087262  352142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 15:06:19.076975606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:06:19.087375  352142 docker.go:318] overlay module found
	I1018 15:06:19.089283  352142 out.go:179] * Using the docker driver based on user configuration
	I1018 15:06:16.235796  347067 addons.go:514] duration metric: took 508.874239ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 15:06:16.564682  347067 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-489104" context rescaled to 1 replicas
	W1018 15:06:18.064585  347067 node_ready.go:57] node "default-k8s-diff-port-489104" has "Ready":"False" status (will retry)
	I1018 15:06:19.090309  352142 start.go:305] selected driver: docker
	I1018 15:06:19.090324  352142 start.go:925] validating driver "docker" against <nil>
	I1018 15:06:19.090335  352142 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:06:19.090980  352142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:06:19.147933  352142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 15:06:19.138241028 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:06:19.148135  352142 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1018 15:06:19.148176  352142 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1018 15:06:19.148433  352142 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 15:06:19.150539  352142 out.go:179] * Using Docker driver with root privileges
	I1018 15:06:19.151779  352142 cni.go:84] Creating CNI manager for ""
	I1018 15:06:19.151848  352142 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:06:19.151872  352142 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 15:06:19.151980  352142 start.go:349] cluster config:
	{Name:newest-cni-741831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-741831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:06:19.153327  352142 out.go:179] * Starting "newest-cni-741831" primary control-plane node in "newest-cni-741831" cluster
	I1018 15:06:19.154334  352142 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:06:19.155556  352142 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:06:19.156744  352142 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:06:19.156787  352142 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 15:06:19.156813  352142 cache.go:58] Caching tarball of preloaded images
	I1018 15:06:19.156868  352142 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 15:06:19.156962  352142 preload.go:233] Found /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 15:06:19.156978  352142 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 15:06:19.157137  352142 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/config.json ...
	I1018 15:06:19.157171  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/config.json: {Name:mkd13aa7acfbed253b9ba5a36cce3dfa1f0aceee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:19.176402  352142 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:06:19.176421  352142 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 15:06:19.176437  352142 cache.go:232] Successfully downloaded all kic artifacts
	I1018 15:06:19.176475  352142 start.go:360] acquireMachinesLock for newest-cni-741831: {Name:mk05ea0bcc583fa4b3d237c8091a165605e0fbe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:06:19.176588  352142 start.go:364] duration metric: took 94.483µs to acquireMachinesLock for "newest-cni-741831"
	I1018 15:06:19.176621  352142 start.go:93] Provisioning new machine with config: &{Name:newest-cni-741831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-741831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:06:19.176710  352142 start.go:125] createHost starting for "" (driver="docker")
	W1018 15:06:18.006003  340627 pod_ready.go:104] pod "coredns-66bc5c9577-cmgb8" is not "Ready", error: <nil>
	W1018 15:06:20.007350  340627 pod_ready.go:104] pod "coredns-66bc5c9577-cmgb8" is not "Ready", error: <nil>
	I1018 15:06:22.007480  340627 pod_ready.go:94] pod "coredns-66bc5c9577-cmgb8" is "Ready"
	I1018 15:06:22.007513  340627 pod_ready.go:86] duration metric: took 38.506906838s for pod "coredns-66bc5c9577-cmgb8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.010793  340627 pod_ready.go:83] waiting for pod "etcd-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.015586  340627 pod_ready.go:94] pod "etcd-no-preload-165275" is "Ready"
	I1018 15:06:22.015617  340627 pod_ready.go:86] duration metric: took 4.797501ms for pod "etcd-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.018019  340627 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.022342  340627 pod_ready.go:94] pod "kube-apiserver-no-preload-165275" is "Ready"
	I1018 15:06:22.022370  340627 pod_ready.go:86] duration metric: took 4.328879ms for pod "kube-apiserver-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.024547  340627 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.205070  340627 pod_ready.go:94] pod "kube-controller-manager-no-preload-165275" is "Ready"
	I1018 15:06:22.205105  340627 pod_ready.go:86] duration metric: took 180.535874ms for pod "kube-controller-manager-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.405341  340627 pod_ready.go:83] waiting for pod "kube-proxy-84fhl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.804708  340627 pod_ready.go:94] pod "kube-proxy-84fhl" is "Ready"
	I1018 15:06:22.804737  340627 pod_ready.go:86] duration metric: took 399.364412ms for pod "kube-proxy-84fhl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:23.009439  340627 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:23.405543  340627 pod_ready.go:94] pod "kube-scheduler-no-preload-165275" is "Ready"
	I1018 15:06:23.405574  340627 pod_ready.go:86] duration metric: took 396.107038ms for pod "kube-scheduler-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:23.405589  340627 pod_ready.go:40] duration metric: took 39.908960633s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:06:23.451163  340627 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:06:23.547653  340627 out.go:179] * Done! kubectl is now configured to use "no-preload-165275" cluster and "default" namespace by default
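
The `pod_ready.go` sequence above is a poll-until-Ready-or-timeout wait over each core control-plane pod. A condensed sketch of the same idea with plain client-go polling (kubeconfig path, pod name, and timeout are illustrative, not minikube's values):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		name := "etcd-no-preload-165275" // example pod from the log above
		err = wait.PollUntilContextTimeout(context.Background(),
			500*time.Millisecond, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // not there yet; keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("pod is Ready")
	}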
	I1018 15:06:19.178580  352142 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 15:06:19.178837  352142 start.go:159] libmachine.API.Create for "newest-cni-741831" (driver="docker")
	I1018 15:06:19.178873  352142 client.go:168] LocalClient.Create starting
	I1018 15:06:19.179005  352142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem
	I1018 15:06:19.179061  352142 main.go:141] libmachine: Decoding PEM data...
	I1018 15:06:19.179076  352142 main.go:141] libmachine: Parsing certificate...
	I1018 15:06:19.179132  352142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem
	I1018 15:06:19.179155  352142 main.go:141] libmachine: Decoding PEM data...
	I1018 15:06:19.179164  352142 main.go:141] libmachine: Parsing certificate...
	I1018 15:06:19.179501  352142 cli_runner.go:164] Run: docker network inspect newest-cni-741831 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 15:06:19.196543  352142 cli_runner.go:211] docker network inspect newest-cni-741831 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 15:06:19.196640  352142 network_create.go:284] running [docker network inspect newest-cni-741831] to gather additional debugging logs...
	I1018 15:06:19.196663  352142 cli_runner.go:164] Run: docker network inspect newest-cni-741831
	W1018 15:06:19.213085  352142 cli_runner.go:211] docker network inspect newest-cni-741831 returned with exit code 1
	I1018 15:06:19.213136  352142 network_create.go:287] error running [docker network inspect newest-cni-741831]: docker network inspect newest-cni-741831: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-741831 not found
	I1018 15:06:19.213172  352142 network_create.go:289] output of [docker network inspect newest-cni-741831]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-741831 not found
	
	** /stderr **
	I1018 15:06:19.213347  352142 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:06:19.230587  352142 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-67ded9675d49 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:eb:89:76:0f:a6} reservation:<nil>}
	I1018 15:06:19.231147  352142 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4b365c92bc46 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:db:b6:83:36:75} reservation:<nil>}
	I1018 15:06:19.231748  352142 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ab6063c7cdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:eb:32:cc:ab:b4} reservation:<nil>}
	I1018 15:06:19.232375  352142 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4b571e6f85a5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:35:91:99:08:5b} reservation:<nil>}
	I1018 15:06:19.232993  352142 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-2decf6b0e9a2 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:32:85:60:59:11:56} reservation:<nil>}
	I1018 15:06:19.233747  352142 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f38de0}
	I1018 15:06:19.233775  352142 network_create.go:124] attempt to create docker network newest-cni-741831 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1018 15:06:19.233823  352142 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-741831 newest-cni-741831
	I1018 15:06:19.295382  352142 network_create.go:108] docker network newest-cni-741831 192.168.94.0/24 created
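
The five `skipping subnet` probes above step the third octet by 9 (49, 58, 67, 76, 85) and settle on the first free candidate, 192.168.94.0/24. A minimal sketch of that probe order, with the step and bounds inferred from the log rather than taken from minikube's network code:

	package main

	import "fmt"

	// freeSubnet walks candidate private /24s the way the log above does:
	// starting at 192.168.49.0/24 and stepping the third octet by 9 until a
	// candidate does not collide with an existing bridge network.
	func freeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 254; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
			"192.168.85.0/24": true,
		}
		fmt.Println(freeSubnet(taken)) // 192.168.94.0/24
	}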
	I1018 15:06:19.295424  352142 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-741831" container
	I1018 15:06:19.295490  352142 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 15:06:19.312794  352142 cli_runner.go:164] Run: docker volume create newest-cni-741831 --label name.minikube.sigs.k8s.io=newest-cni-741831 --label created_by.minikube.sigs.k8s.io=true
	I1018 15:06:19.332326  352142 oci.go:103] Successfully created a docker volume newest-cni-741831
	I1018 15:06:19.332413  352142 cli_runner.go:164] Run: docker run --rm --name newest-cni-741831-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-741831 --entrypoint /usr/bin/test -v newest-cni-741831:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 15:06:19.734788  352142 oci.go:107] Successfully prepared a docker volume newest-cni-741831
	I1018 15:06:19.734843  352142 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:06:19.734868  352142 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 15:06:19.734956  352142 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-741831:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
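
The tar run above is a compact trick for populating a named Docker volume: mount the preload read-only, mount the volume at the extraction root, and run `tar` as the entrypoint of a throwaway kicbase container. The equivalent call from Go, a sketch with an illustrative host path and the image tag taken from the log:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Untar an lz4-compressed preload into the named volume "newest-cni-741831".
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro", // illustrative host path
			"-v", "newest-cni-741831:/extractDir",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757",
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
	}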
	W1018 15:06:20.564874  347067 node_ready.go:57] node "default-k8s-diff-port-489104" has "Ready":"False" status (will retry)
	W1018 15:06:22.565092  347067 node_ready.go:57] node "default-k8s-diff-port-489104" has "Ready":"False" status (will retry)
	I1018 15:06:24.339197  352142 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-741831:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.604197145s)
	I1018 15:06:24.339229  352142 kic.go:203] duration metric: took 4.604355206s to extract preloaded images to volume ...
	W1018 15:06:24.339333  352142 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 15:06:24.339364  352142 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 15:06:24.339401  352142 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 15:06:24.406366  352142 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-741831 --name newest-cni-741831 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-741831 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-741831 --network newest-cni-741831 --ip 192.168.94.2 --volume newest-cni-741831:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 15:06:24.727314  352142 cli_runner.go:164] Run: docker container inspect newest-cni-741831 --format={{.State.Running}}
	I1018 15:06:24.750170  352142 cli_runner.go:164] Run: docker container inspect newest-cni-741831 --format={{.State.Status}}
	I1018 15:06:24.774109  352142 cli_runner.go:164] Run: docker exec newest-cni-741831 stat /var/lib/dpkg/alternatives/iptables
	I1018 15:06:24.826218  352142 oci.go:144] the created container "newest-cni-741831" has a running status.
	I1018 15:06:24.826247  352142 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa...
	I1018 15:06:25.591975  352142 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 15:06:25.618152  352142 cli_runner.go:164] Run: docker container inspect newest-cni-741831 --format={{.State.Status}}
	I1018 15:06:25.635630  352142 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 15:06:25.635652  352142 kic_runner.go:114] Args: [docker exec --privileged newest-cni-741831 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 15:06:25.683939  352142 cli_runner.go:164] Run: docker container inspect newest-cni-741831 --format={{.State.Status}}
	I1018 15:06:25.701188  352142 machine.go:93] provisionDockerMachine start ...
	I1018 15:06:25.701290  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:25.719680  352142 main.go:141] libmachine: Using SSH client type: native
	I1018 15:06:25.720029  352142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1018 15:06:25.720060  352142 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:06:25.854071  352142 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-741831
	
	I1018 15:06:25.854106  352142 ubuntu.go:182] provisioning hostname "newest-cni-741831"
	I1018 15:06:25.854160  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:25.872062  352142 main.go:141] libmachine: Using SSH client type: native
	I1018 15:06:25.872341  352142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1018 15:06:25.872365  352142 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-741831 && echo "newest-cni-741831" | sudo tee /etc/hostname
	I1018 15:06:26.015459  352142 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-741831
	
	I1018 15:06:26.015545  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:26.033766  352142 main.go:141] libmachine: Using SSH client type: native
	I1018 15:06:26.034053  352142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1018 15:06:26.034076  352142 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-741831' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-741831/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-741831' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:06:26.171352  352142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
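The script above keeps /etc/hosts in step with the new hostname without piling up duplicate entries. Restated with comments (hostname is the logged value):

    HOSTNAME=newest-cni-741831
    if ! grep -xq ".*\s${HOSTNAME}" /etc/hosts; then   # no line ends in the hostname yet
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then     # rewrite an existing 127.0.1.1 line
        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${HOSTNAME}/g" /etc/hosts
      else                                             # otherwise append a fresh entry
        echo "127.0.1.1 ${HOSTNAME}" | sudo tee -a /etc/hosts
      fi
    fi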
	I1018 15:06:26.171386  352142 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 15:06:26.171426  352142 ubuntu.go:190] setting up certificates
	I1018 15:06:26.171441  352142 provision.go:84] configureAuth start
	I1018 15:06:26.171503  352142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-741831
	I1018 15:06:26.190241  352142 provision.go:143] copyHostCerts
	I1018 15:06:26.190312  352142 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem, removing ...
	I1018 15:06:26.190325  352142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:06:26.190406  352142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 15:06:26.190521  352142 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem, removing ...
	I1018 15:06:26.190537  352142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:06:26.190580  352142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 15:06:26.190670  352142 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem, removing ...
	I1018 15:06:26.190681  352142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:06:26.190722  352142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 15:06:26.190798  352142 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.newest-cni-741831 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-741831]
	I1018 15:06:26.528284  352142 provision.go:177] copyRemoteCerts
	I1018 15:06:26.528341  352142 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:06:26.528375  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:26.546905  352142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:26.644596  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:06:26.665034  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 15:06:26.683543  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 15:06:26.701141  352142 provision.go:87] duration metric: took 529.670696ms to configureAuth
	I1018 15:06:26.701174  352142 ubuntu.go:206] setting minikube options for container-runtime
	I1018 15:06:26.701364  352142 config.go:182] Loaded profile config "newest-cni-741831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:26.701496  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:26.719555  352142 main.go:141] libmachine: Using SSH client type: native
	I1018 15:06:26.719765  352142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1018 15:06:26.719782  352142 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:06:26.970657  352142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 15:06:26.970682  352142 machine.go:96] duration metric: took 1.269467705s to provisionDockerMachine
	I1018 15:06:26.970692  352142 client.go:171] duration metric: took 7.791810529s to LocalClient.Create
	I1018 15:06:26.970712  352142 start.go:167] duration metric: took 7.791877225s to libmachine.API.Create "newest-cni-741831"
	I1018 15:06:26.970719  352142 start.go:293] postStartSetup for "newest-cni-741831" (driver="docker")
	I1018 15:06:26.970729  352142 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:06:26.970806  352142 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:06:26.970861  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:26.988335  352142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:27.087221  352142 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:06:27.090783  352142 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 15:06:27.090809  352142 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 15:06:27.090827  352142 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/addons for local assets ...
	I1018 15:06:27.090877  352142 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/files for local assets ...
	I1018 15:06:27.090972  352142 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem -> 931872.pem in /etc/ssl/certs
	I1018 15:06:27.091056  352142 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:06:27.098707  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:06:27.118867  352142 start.go:296] duration metric: took 148.132063ms for postStartSetup
	I1018 15:06:27.119258  352142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-741831
	I1018 15:06:27.138075  352142 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/config.json ...
	I1018 15:06:27.138321  352142 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 15:06:27.138366  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:27.155272  352142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:27.249460  352142 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 15:06:27.254563  352142 start.go:128] duration metric: took 8.077835013s to createHost
	I1018 15:06:27.254590  352142 start.go:83] releasing machines lock for "newest-cni-741831", held for 8.077985561s
	I1018 15:06:27.254660  352142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-741831
	I1018 15:06:27.273539  352142 ssh_runner.go:195] Run: cat /version.json
	I1018 15:06:27.273588  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:27.273628  352142 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:06:27.273693  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:27.291712  352142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:27.292133  352142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:27.438032  352142 ssh_runner.go:195] Run: systemctl --version
	I1018 15:06:27.444732  352142 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:06:27.480771  352142 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:06:27.485774  352142 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:06:27.485841  352142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:06:27.512064  352142 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
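The find invocation above sidelines any pre-existing bridge/podman CNI configs so they cannot conflict with the kindnet CNI chosen later. With the quoting a shell would need (the log prints the command unquoted):

    # Rename conflicting CNI configs to *.mk_disabled so the runtime ignores them.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;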
	I1018 15:06:27.512089  352142 start.go:495] detecting cgroup driver to use...
	I1018 15:06:27.512126  352142 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 15:06:27.512175  352142 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:06:27.528665  352142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:06:27.541203  352142 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:06:27.541255  352142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:06:27.557700  352142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:06:27.577069  352142 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:06:27.661864  352142 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:06:27.751078  352142 docker.go:234] disabling docker service ...
	I1018 15:06:27.751149  352142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:06:27.771123  352142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:06:27.787019  352142 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:06:27.884416  352142 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:06:27.973822  352142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 15:06:27.986604  352142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:06:28.000991  352142 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 15:06:28.001058  352142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.011828  352142 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 15:06:28.011896  352142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.020931  352142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.030085  352142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.039092  352142 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:06:28.047412  352142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.055961  352142 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.069830  352142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.079271  352142 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:06:28.087557  352142 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 15:06:28.095726  352142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:06:28.204871  352142 ssh_runner.go:195] Run: sudo systemctl restart crio
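Taken together, the steps since the crictl.yaml write amount to a small CRI-O reconfiguration: point crictl at the crio socket, pin the pause image, switch to the systemd cgroup driver, then reload and restart. Consolidated (same files and values as logged; the conmon_cgroup, default_sysctls and sysctl edits are omitted for brevity):

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio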
	I1018 15:06:28.308340  352142 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:06:28.308400  352142 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:06:28.312652  352142 start.go:563] Will wait 60s for crictl version
	I1018 15:06:28.312706  352142 ssh_runner.go:195] Run: which crictl
	I1018 15:06:28.316479  352142 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 15:06:28.342582  352142 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 15:06:28.342759  352142 ssh_runner.go:195] Run: crio --version
	I1018 15:06:28.371661  352142 ssh_runner.go:195] Run: crio --version
	I1018 15:06:28.404027  352142 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 15:06:28.405208  352142 cli_runner.go:164] Run: docker network inspect newest-cni-741831 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:06:28.422412  352142 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 15:06:28.426696  352142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
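The hosts update above rebuilds the file instead of blindly appending, so repeated starts never accumulate duplicate host.minikube.internal lines. The replace-or-add pattern in isolation (IP and name as logged):

    IP=192.168.94.1; NAME=host.minikube.internal
    # Drop any existing line for NAME, then append the current mapping.
    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts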
	I1018 15:06:28.438922  352142 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 15:06:28.440159  352142 kubeadm.go:883] updating cluster {Name:newest-cni-741831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-741831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 15:06:28.440298  352142 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:06:28.440369  352142 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:06:28.471339  352142 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:06:28.471358  352142 crio.go:433] Images already preloaded, skipping extraction
	I1018 15:06:28.471399  352142 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:06:28.498054  352142 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:06:28.498077  352142 cache_images.go:85] Images are preloaded, skipping loading
	I1018 15:06:28.498085  352142 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1018 15:06:28.498165  352142 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-741831 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-741831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 15:06:28.498226  352142 ssh_runner.go:195] Run: crio config
	I1018 15:06:28.544284  352142 cni.go:84] Creating CNI manager for ""
	I1018 15:06:28.544310  352142 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:06:28.544334  352142 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 15:06:28.544364  352142 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-741831 NodeName:newest-cni-741831 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 15:06:28.544529  352142 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-741831"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
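	This manifest is written to /var/tmp/minikube/kubeadm.yaml.new just below and, once promoted, drives the bootstrap; the init call later in this log is, abridged (the full --ignore-preflight-errors list is longer):

    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem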
	
	I1018 15:06:28.544591  352142 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 15:06:28.552919  352142 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 15:06:28.552987  352142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 15:06:28.560695  352142 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 15:06:28.573650  352142 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 15:06:28.589169  352142 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1018 15:06:28.602324  352142 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 15:06:28.606123  352142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:06:28.616292  352142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:06:28.702657  352142 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:06:28.728867  352142 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831 for IP: 192.168.94.2
	I1018 15:06:28.728898  352142 certs.go:195] generating shared ca certs ...
	I1018 15:06:28.728944  352142 certs.go:227] acquiring lock for ca certs: {Name:mk6d3b4e44856f498157352ffe8bb89d2c7a3998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:28.729163  352142 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key
	I1018 15:06:28.729240  352142 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key
	I1018 15:06:28.729254  352142 certs.go:257] generating profile certs ...
	I1018 15:06:28.729414  352142 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/client.key
	I1018 15:06:28.729451  352142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/client.crt with IP's: []
	I1018 15:06:28.792470  352142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/client.crt ...
	I1018 15:06:28.792500  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/client.crt: {Name:mke8e96a052b8eb8b398b73425f8e5ee1007513d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:28.792716  352142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/client.key ...
	I1018 15:06:28.792733  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/client.key: {Name:mk9c5cc06cccf0052c525e1e52278d7f0300c686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:28.792854  352142 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.key.5b00fbe4
	I1018 15:06:28.792878  352142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.crt.5b00fbe4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	W1018 15:06:24.565716  347067 node_ready.go:57] node "default-k8s-diff-port-489104" has "Ready":"False" status (will retry)
	W1018 15:06:27.064596  347067 node_ready.go:57] node "default-k8s-diff-port-489104" has "Ready":"False" status (will retry)
	I1018 15:06:28.065074  347067 node_ready.go:49] node "default-k8s-diff-port-489104" is "Ready"
	I1018 15:06:28.065102  347067 node_ready.go:38] duration metric: took 12.003457865s for node "default-k8s-diff-port-489104" to be "Ready" ...
	I1018 15:06:28.065119  347067 api_server.go:52] waiting for apiserver process to appear ...
	I1018 15:06:28.065157  347067 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:06:28.076701  347067 api_server.go:72] duration metric: took 12.349786258s to wait for apiserver process to appear ...
	I1018 15:06:28.076733  347067 api_server.go:88] waiting for apiserver healthz status ...
	I1018 15:06:28.076752  347067 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1018 15:06:28.081593  347067 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1018 15:06:28.082688  347067 api_server.go:141] control plane version: v1.34.1
	I1018 15:06:28.082715  347067 api_server.go:131] duration metric: took 5.974362ms to wait for apiserver health ...
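The healthz wait is an HTTPS GET against the apiserver; a hand-run equivalent would be the following (curl is an illustration, not what the test binary uses; -k because the endpoint serves the cluster-internal CA):

    curl -sk https://192.168.103.2:8444/healthz   # expect: ok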
	I1018 15:06:28.082726  347067 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:06:28.086013  347067 system_pods.go:59] 8 kube-system pods found
	I1018 15:06:28.086058  347067 system_pods.go:61] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:06:28.086070  347067 system_pods.go:61] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running
	I1018 15:06:28.086083  347067 system_pods.go:61] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:06:28.086088  347067 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running
	I1018 15:06:28.086097  347067 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running
	I1018 15:06:28.086103  347067 system_pods.go:61] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:06:28.086110  347067 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running
	I1018 15:06:28.086118  347067 system_pods.go:61] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:06:28.086130  347067 system_pods.go:74] duration metric: took 3.396495ms to wait for pod list to return data ...
	I1018 15:06:28.086142  347067 default_sa.go:34] waiting for default service account to be created ...
	I1018 15:06:28.088698  347067 default_sa.go:45] found service account: "default"
	I1018 15:06:28.088719  347067 default_sa.go:55] duration metric: took 2.569918ms for default service account to be created ...
	I1018 15:06:28.088729  347067 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 15:06:28.091501  347067 system_pods.go:86] 8 kube-system pods found
	I1018 15:06:28.091528  347067 system_pods.go:89] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:06:28.091534  347067 system_pods.go:89] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running
	I1018 15:06:28.091540  347067 system_pods.go:89] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:06:28.091543  347067 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running
	I1018 15:06:28.091547  347067 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running
	I1018 15:06:28.091550  347067 system_pods.go:89] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:06:28.091554  347067 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running
	I1018 15:06:28.091558  347067 system_pods.go:89] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:06:28.091576  347067 retry.go:31] will retry after 228.914741ms: missing components: kube-dns
	I1018 15:06:28.325222  347067 system_pods.go:86] 8 kube-system pods found
	I1018 15:06:28.325259  347067 system_pods.go:89] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:06:28.325267  347067 system_pods.go:89] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running
	I1018 15:06:28.325275  347067 system_pods.go:89] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:06:28.325281  347067 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running
	I1018 15:06:28.325287  347067 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running
	I1018 15:06:28.325292  347067 system_pods.go:89] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:06:28.325297  347067 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running
	I1018 15:06:28.325304  347067 system_pods.go:89] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:06:28.325327  347067 retry.go:31] will retry after 353.361454ms: missing components: kube-dns
	I1018 15:06:28.682887  347067 system_pods.go:86] 8 kube-system pods found
	I1018 15:06:28.682948  347067 system_pods.go:89] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:06:28.682958  347067 system_pods.go:89] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running
	I1018 15:06:28.682966  347067 system_pods.go:89] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:06:28.682974  347067 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running
	I1018 15:06:28.682981  347067 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running
	I1018 15:06:28.682991  347067 system_pods.go:89] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:06:28.682997  347067 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running
	I1018 15:06:28.683008  347067 system_pods.go:89] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:06:28.683029  347067 retry.go:31] will retry after 298.181886ms: missing components: kube-dns
	I1018 15:06:28.986254  347067 system_pods.go:86] 8 kube-system pods found
	I1018 15:06:28.986282  347067 system_pods.go:89] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Running
	I1018 15:06:28.986288  347067 system_pods.go:89] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running
	I1018 15:06:28.986292  347067 system_pods.go:89] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:06:28.986296  347067 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running
	I1018 15:06:28.986299  347067 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running
	I1018 15:06:28.986302  347067 system_pods.go:89] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:06:28.986305  347067 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running
	I1018 15:06:28.986308  347067 system_pods.go:89] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Running
	I1018 15:06:28.986316  347067 system_pods.go:126] duration metric: took 897.58086ms to wait for k8s-apps to be running ...
	I1018 15:06:28.986323  347067 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 15:06:28.986366  347067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:06:28.999817  347067 system_svc.go:56] duration metric: took 13.480567ms WaitForService to wait for kubelet
	I1018 15:06:28.999843  347067 kubeadm.go:586] duration metric: took 13.272933961s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:06:28.999865  347067 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:06:29.003008  347067 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 15:06:29.003035  347067 node_conditions.go:123] node cpu capacity is 8
	I1018 15:06:29.003050  347067 node_conditions.go:105] duration metric: took 3.181093ms to run NodePressure ...
	I1018 15:06:29.003062  347067 start.go:241] waiting for startup goroutines ...
	I1018 15:06:29.003069  347067 start.go:246] waiting for cluster config update ...
	I1018 15:06:29.003089  347067 start.go:255] writing updated cluster config ...
	I1018 15:06:29.003370  347067 ssh_runner.go:195] Run: rm -f paused
	I1018 15:06:29.007398  347067 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:06:29.011225  347067 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dtjgd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.015433  347067 pod_ready.go:94] pod "coredns-66bc5c9577-dtjgd" is "Ready"
	I1018 15:06:29.015452  347067 pod_ready.go:86] duration metric: took 4.205314ms for pod "coredns-66bc5c9577-dtjgd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.017638  347067 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.021381  347067 pod_ready.go:94] pod "etcd-default-k8s-diff-port-489104" is "Ready"
	I1018 15:06:29.021399  347067 pod_ready.go:86] duration metric: took 3.738445ms for pod "etcd-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.023308  347067 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.026567  347067 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-489104" is "Ready"
	I1018 15:06:29.026587  347067 pod_ready.go:86] duration metric: took 3.257885ms for pod "kube-apiserver-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.028296  347067 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.063010  352142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.crt.5b00fbe4 ...
	I1018 15:06:29.063038  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.crt.5b00fbe4: {Name:mk3d0668ddae7d28b699df3536f8e4c4c7dbf460 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:29.063212  352142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.key.5b00fbe4 ...
	I1018 15:06:29.063226  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.key.5b00fbe4: {Name:mkd60891ad06419625ec1cb1227353159cfb6546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:29.063304  352142 certs.go:382] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.crt.5b00fbe4 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.crt
	I1018 15:06:29.063375  352142 certs.go:386] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.key.5b00fbe4 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.key
	I1018 15:06:29.063429  352142 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.key
	I1018 15:06:29.063450  352142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.crt with IP's: []
	I1018 15:06:29.291547  352142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.crt ...
	I1018 15:06:29.291575  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.crt: {Name:mk9053fa1d59e516145d535ccf928a7a4620007b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:29.291747  352142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.key ...
	I1018 15:06:29.291760  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.key: {Name:mk718982611c021d2ca690df47a58e465ee8a410 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:29.291962  352142 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem (1338 bytes)
	W1018 15:06:29.292002  352142 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187_empty.pem, impossibly tiny 0 bytes
	I1018 15:06:29.292011  352142 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 15:06:29.292032  352142 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem (1082 bytes)
	I1018 15:06:29.292057  352142 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem (1123 bytes)
	I1018 15:06:29.292078  352142 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem (1675 bytes)
	I1018 15:06:29.292125  352142 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:06:29.292759  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 15:06:29.311389  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 15:06:29.329272  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 15:06:29.346654  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 15:06:29.364002  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 15:06:29.382237  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 15:06:29.400409  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 15:06:29.420478  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 15:06:29.440286  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 15:06:29.460529  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem --> /usr/share/ca-certificates/93187.pem (1338 bytes)
	I1018 15:06:29.478328  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /usr/share/ca-certificates/931872.pem (1708 bytes)
	I1018 15:06:29.495696  352142 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 15:06:29.508533  352142 ssh_runner.go:195] Run: openssl version
	I1018 15:06:29.514752  352142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 15:06:29.523282  352142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:06:29.527456  352142 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:06:29.527507  352142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:06:29.562619  352142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 15:06:29.573830  352142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93187.pem && ln -fs /usr/share/ca-certificates/93187.pem /etc/ssl/certs/93187.pem"
	I1018 15:06:29.582909  352142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93187.pem
	I1018 15:06:29.587243  352142 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 14:26 /usr/share/ca-certificates/93187.pem
	I1018 15:06:29.587318  352142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93187.pem
	I1018 15:06:29.624088  352142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93187.pem /etc/ssl/certs/51391683.0"
	I1018 15:06:29.633601  352142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/931872.pem && ln -fs /usr/share/ca-certificates/931872.pem /etc/ssl/certs/931872.pem"
	I1018 15:06:29.642526  352142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/931872.pem
	I1018 15:06:29.646524  352142 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 14:26 /usr/share/ca-certificates/931872.pem
	I1018 15:06:29.646586  352142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/931872.pem
	I1018 15:06:29.681559  352142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/931872.pem /etc/ssl/certs/3ec20f2e.0"
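The three ln -fs calls above follow OpenSSL's subject-hash lookup convention: a CA file is located through a symlink named <subject-hash>.0, with the hash produced by openssl x509 -hash. Generically (CERT is a placeholder path):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941, as above
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"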
	I1018 15:06:29.690980  352142 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 15:06:29.694862  352142 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 15:06:29.694929  352142 kubeadm.go:400] StartCluster: {Name:newest-cni-741831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-741831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:06:29.695023  352142 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 15:06:29.695110  352142 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 15:06:29.723568  352142 cri.go:89] found id: ""
	I1018 15:06:29.723638  352142 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 15:06:29.731906  352142 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 15:06:29.740249  352142 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 15:06:29.740294  352142 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 15:06:29.748230  352142 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 15:06:29.748252  352142 kubeadm.go:157] found existing configuration files:
	
	I1018 15:06:29.748291  352142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 15:06:29.756351  352142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 15:06:29.756404  352142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 15:06:29.764376  352142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 15:06:29.772207  352142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 15:06:29.772260  352142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 15:06:29.779898  352142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 15:06:29.787823  352142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 15:06:29.787891  352142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 15:06:29.795735  352142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 15:06:29.803573  352142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 15:06:29.803624  352142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
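
	[Annotation] The sequence above is minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file when the check fails (here the files do not exist at all, so every grep exits with status 2 and each rm is a no-op). A minimal sketch of the same check, with the endpoint string and file list taken from the log:

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	    )

	    func main() {
	        endpoint := "https://control-plane.minikube.internal:8443" // from the log above
	        files := []string{
	            "/etc/kubernetes/admin.conf",
	            "/etc/kubernetes/kubelet.conf",
	            "/etc/kubernetes/controller-manager.conf",
	            "/etc/kubernetes/scheduler.conf",
	        }
	        for _, f := range files {
	            // grep exits non-zero when the endpoint is absent (or the file is
	            // missing), which is the signal to clear the file before `kubeadm init`.
	            if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
	                fmt.Printf("%s looks stale, removing (%v)\n", f, err)
	                os.Remove(f) // error ignored, mirroring `rm -f`
	            }
	        }
	    }
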
	I1018 15:06:29.811038  352142 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 15:06:29.850187  352142 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 15:06:29.850272  352142 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 15:06:29.871064  352142 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 15:06:29.871172  352142 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 15:06:29.871247  352142 kubeadm.go:318] OS: Linux
	I1018 15:06:29.871372  352142 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 15:06:29.871447  352142 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 15:06:29.871518  352142 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 15:06:29.871595  352142 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 15:06:29.871671  352142 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 15:06:29.871761  352142 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 15:06:29.871839  352142 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 15:06:29.871898  352142 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 15:06:29.935613  352142 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 15:06:29.935785  352142 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 15:06:29.935942  352142 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 15:06:29.943662  352142 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 15:06:29.412888  347067 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-489104" is "Ready"
	I1018 15:06:29.412928  347067 pod_ready.go:86] duration metric: took 384.611351ms for pod "kube-controller-manager-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.612847  347067 pod_ready.go:83] waiting for pod "kube-proxy-7wbfs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:30.011983  347067 pod_ready.go:94] pod "kube-proxy-7wbfs" is "Ready"
	I1018 15:06:30.012012  347067 pod_ready.go:86] duration metric: took 399.134641ms for pod "kube-proxy-7wbfs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:30.211949  347067 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:30.612493  347067 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-489104" is "Ready"
	I1018 15:06:30.612525  347067 pod_ready.go:86] duration metric: took 400.540545ms for pod "kube-scheduler-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:30.612538  347067 pod_ready.go:40] duration metric: took 1.60510698s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:06:30.661514  347067 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:06:30.663459  347067 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-489104" cluster and "default" namespace by default
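
	[Annotation] The "minor skew: 0" line is minikube's client/server version comparison before it declares the cluster ready: kubectl 1.34 against a 1.34 API server. A sketch of that minor-skew arithmetic, with both version strings hard-coded from the log:

	    package main

	    import (
	        "fmt"
	        "strconv"
	        "strings"
	    )

	    // minor extracts the minor component from a "major.minor.patch" version string.
	    func minor(v string) int {
	        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	        m, _ := strconv.Atoi(parts[1])
	        return m
	    }

	    func main() {
	        client, cluster := "1.34.1", "1.34.1" // values from the log line above
	        skew := minor(client) - minor(cluster)
	        if skew < 0 {
	            skew = -skew
	        }
	        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
	    }
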
	I1018 15:06:29.948047  352142 out.go:252]   - Generating certificates and keys ...
	I1018 15:06:29.948147  352142 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 15:06:29.948229  352142 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 15:06:30.250963  352142 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 15:06:30.366731  352142 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 15:06:30.535222  352142 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 15:06:30.853257  352142 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 15:06:31.046320  352142 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 15:06:31.046555  352142 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-741831] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 15:06:31.171804  352142 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 15:06:31.172019  352142 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-741831] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 15:06:32.275618  352142 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 15:06:33.097457  352142 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 15:06:33.197652  352142 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 15:06:33.197773  352142 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 15:06:33.308356  352142 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 15:06:33.547102  352142 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 15:06:33.677173  352142 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 15:06:34.208214  352142 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 15:06:34.302781  352142 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 15:06:34.303476  352142 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 15:06:34.308455  352142 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 15:06:34.309825  352142 out.go:252]   - Booting up control plane ...
	I1018 15:06:34.309970  352142 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 15:06:34.310108  352142 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 15:06:34.311099  352142 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 15:06:34.325184  352142 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 15:06:34.325348  352142 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 15:06:34.331880  352142 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 15:06:34.332079  352142 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 15:06:34.332147  352142 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 15:06:34.435219  352142 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 15:06:34.435411  352142 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 15:06:35.436962  352142 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001878752s
	I1018 15:06:35.440026  352142 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 15:06:35.440152  352142 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1018 15:06:35.440300  352142 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 15:06:35.440434  352142 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 15:06:36.457261  352142 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.017092353s
	I1018 15:06:37.533342  352142 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.093263869s
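
	[Annotation] kubeadm's control-plane check polls each component's health endpoint until it answers: the API server's /livez on the advertise address, and the controller-manager's and scheduler's localhost ports (10257 and 10259). A minimal sketch of one such probe; the components serve self-signed certificates during bootstrap, so verification is skipped here, for this probe only:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            // Bootstrap components present self-signed certs; never skip
	            // verification for real API traffic.
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        // Endpoints taken from the control-plane-check lines above.
	        for _, url := range []string{
	            "https://127.0.0.1:10257/healthz", // kube-controller-manager
	            "https://127.0.0.1:10259/livez",   // kube-scheduler
	        } {
	            resp, err := client.Get(url)
	            if err != nil {
	                fmt.Println(url, "not ready:", err)
	                continue
	            }
	            resp.Body.Close()
	            fmt.Println(url, "->", resp.Status)
	        }
	    }
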
	
	
	==> CRI-O <==
	Oct 18 15:06:03 no-preload-165275 crio[556]: time="2025-10-18T15:06:03.617503198Z" level=info msg="Started container" PID=1724 containerID=1ea0d2250d60235dd8ab7570ad302ef35617e4a9b3276fb327f6980e483073ab description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468/dashboard-metrics-scraper id=66e7306e-b51b-4d98-86f1-6c7c8c1ca055 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9ace59796e3ac4dc8476d8b04a9ba2ecd161b3698c889f248a8e3aa87a7c9650
	Oct 18 15:06:03 no-preload-165275 crio[556]: time="2025-10-18T15:06:03.686817508Z" level=info msg="Removing container: 609c9df9c9c317c944ad79690ed3684ddc5526e73335deadb8ac7e06f6710b54" id=ab3db32b-745c-4add-aafa-bcd13b2fd219 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:06:03 no-preload-165275 crio[556]: time="2025-10-18T15:06:03.696817358Z" level=info msg="Removed container 609c9df9c9c317c944ad79690ed3684ddc5526e73335deadb8ac7e06f6710b54: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468/dashboard-metrics-scraper" id=ab3db32b-745c-4add-aafa-bcd13b2fd219 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.715177431Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b74bda98-017c-45dd-be22-c8de3aab835f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.716139221Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1c156b98-a5ea-4e41-8df1-e5163d874fc8 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.717258816Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=42ac826d-d9e9-4df6-b21c-7c692742b77d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.717532767Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.724438938Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.724641941Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ce8ae44cd8968e8cd02b513f88f15ce63817d634151755feec57ae2aa624ba99/merged/etc/passwd: no such file or directory"
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.724683993Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ce8ae44cd8968e8cd02b513f88f15ce63817d634151755feec57ae2aa624ba99/merged/etc/group: no such file or directory"
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.725049904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.756537622Z" level=info msg="Created container bd99f521b104d52b1eeda1acab34e9e643be51bbb7965e0beed26ac85243b990: kube-system/storage-provisioner/storage-provisioner" id=42ac826d-d9e9-4df6-b21c-7c692742b77d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.757252809Z" level=info msg="Starting container: bd99f521b104d52b1eeda1acab34e9e643be51bbb7965e0beed26ac85243b990" id=c06e7d88-d508-4550-a419-c8aa4e84921a name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:06:13 no-preload-165275 crio[556]: time="2025-10-18T15:06:13.759359373Z" level=info msg="Started container" PID=1738 containerID=bd99f521b104d52b1eeda1acab34e9e643be51bbb7965e0beed26ac85243b990 description=kube-system/storage-provisioner/storage-provisioner id=c06e7d88-d508-4550-a419-c8aa4e84921a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b6604b1d1a7ac7b7bcc760ac60f0458edc3f2d0e5d0f4769caa10b24ce55e04c
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.572908097Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fbcd610f-78c2-4e56-a194-f8c80fbbefb1 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.573877584Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bb2d95b2-a79d-43c1-b8e9-5a64e26b1306 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.575021268Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468/dashboard-metrics-scraper" id=738120ce-d509-4f0a-828f-bbe5a6f0c97a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.575293944Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.580579731Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.581089922Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.611615842Z" level=info msg="Created container 75a93b11eb42f4e75eb477490277c486c91ad8b2a0193107139e8ee97e00ca8c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468/dashboard-metrics-scraper" id=738120ce-d509-4f0a-828f-bbe5a6f0c97a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.612350635Z" level=info msg="Starting container: 75a93b11eb42f4e75eb477490277c486c91ad8b2a0193107139e8ee97e00ca8c" id=5f9f0836-5e3b-49be-b183-874868c9d0f4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.614803871Z" level=info msg="Started container" PID=1774 containerID=75a93b11eb42f4e75eb477490277c486c91ad8b2a0193107139e8ee97e00ca8c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468/dashboard-metrics-scraper id=5f9f0836-5e3b-49be-b183-874868c9d0f4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9ace59796e3ac4dc8476d8b04a9ba2ecd161b3698c889f248a8e3aa87a7c9650
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.750638037Z" level=info msg="Removing container: 1ea0d2250d60235dd8ab7570ad302ef35617e4a9b3276fb327f6980e483073ab" id=14bb2112-49eb-46ab-9dfb-286774147c8c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:06:24 no-preload-165275 crio[556]: time="2025-10-18T15:06:24.764121612Z" level=info msg="Removed container 1ea0d2250d60235dd8ab7570ad302ef35617e4a9b3276fb327f6980e483073ab: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468/dashboard-metrics-scraper" id=14bb2112-49eb-46ab-9dfb-286774147c8c name=/runtime.v1.RuntimeService/RemoveContainer
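
	[Annotation] The CRI-O section above shows one full CrashLoopBackOff cycle for dashboard-metrics-scraper: the kubelet has the runtime create and start a fresh container, the previous attempt's exited container is removed, and the cycle repeats once the backoff expires. To inspect that churn directly on the node, a sketch wrapping crictl (the --name filter is a standard crictl flag; run as root or via sudo as the ssh_runner lines do):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // List every attempt (running and exited) of the crash-looping container,
	        // analogous to the `crictl ps -a` invocations earlier in this report.
	        out, err := exec.Command("sudo", "crictl", "ps", "-a",
	            "--name", "dashboard-metrics-scraper").CombinedOutput()
	        if err != nil {
	            fmt.Println("crictl failed:", err)
	        }
	        fmt.Print(string(out))
	    }
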
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	75a93b11eb42f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago       Exited              dashboard-metrics-scraper   3                   9ace59796e3ac       dashboard-metrics-scraper-6ffb444bf9-vd468   kubernetes-dashboard
	bd99f521b104d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   b6604b1d1a7ac       storage-provisioner                          kube-system
	40d37635759ff       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago       Running             kubernetes-dashboard        0                   e1f42a7afcec3       kubernetes-dashboard-855c9754f9-4l599        kubernetes-dashboard
	b7590848e3e2c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   05c31e78396dd       busybox                                      default
	d8cad0e51da9b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           57 seconds ago       Running             coredns                     0                   c24acd414200b       coredns-66bc5c9577-cmgb8                     kube-system
	a61bf08741c20       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   fa8b2b1d4d966       kindnet-8c5w4                                kube-system
	f24643699519e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   b6604b1d1a7ac       storage-provisioner                          kube-system
	d8643b50024f1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           57 seconds ago       Running             kube-proxy                  0                   c37dbb78327f2       kube-proxy-84fhl                             kube-system
	d37cf270acf4c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   7141a9e314ca6       kube-controller-manager-no-preload-165275    kube-system
	ce5891388244a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   c835c15571cb9       kube-apiserver-no-preload-165275             kube-system
	c1d28d4d24c3e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   9907de81fe5e2       kube-scheduler-no-preload-165275             kube-system
	3e2c583673b99       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   9e5d9a5eb49d5       etcd-no-preload-165275                       kube-system
	
	
	==> coredns [d8cad0e51da9ba1a5945123306231034e96864d53528c3d0398f4332e290fd40] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44093 - 63494 "HINFO IN 7903083053169130360.1446552966389043242. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.300822671s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
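
	[Annotation] CoreDNS comes up before the API server is reachable through the service VIP, so it logs "waiting for Kubernetes API", eventually starts unsynced, and its reflector list calls time out against 10.96.0.1:443 until kube-proxy has programmed the service rules. A quick connectivity probe for that VIP from inside a pod, assuming the default service CIDR seen in this report:

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        // 10.96.0.1:443 is the in-cluster "kubernetes" service VIP from the
	        // CoreDNS errors above; a dial timeout here reproduces the i/o timeout.
	        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	        if err != nil {
	            fmt.Println("apiserver VIP unreachable:", err)
	            return
	        }
	        conn.Close()
	        fmt.Println("apiserver VIP reachable")
	    }
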
	
	
	==> describe nodes <==
	Name:               no-preload-165275
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-165275
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=no-preload-165275
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_04_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:04:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-165275
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:06:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:06:33 +0000   Sat, 18 Oct 2025 15:04:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:06:33 +0000   Sat, 18 Oct 2025 15:04:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:06:33 +0000   Sat, 18 Oct 2025 15:04:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 15:06:33 +0000   Sat, 18 Oct 2025 15:05:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-165275
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                6d727dff-cef3-4b2d-bb6c-d6d48f30b9ab
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-cmgb8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-no-preload-165275                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-8c5w4                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-no-preload-165275              250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-no-preload-165275     200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-84fhl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-no-preload-165275              100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vd468    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4l599         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  Starting                 57s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node no-preload-165275 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node no-preload-165275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x8 over 2m2s)  kubelet          Node no-preload-165275 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node no-preload-165275 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node no-preload-165275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node no-preload-165275 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           113s                 node-controller  Node no-preload-165275 event: Registered Node no-preload-165275 in Controller
	  Normal  NodeReady                98s                  kubelet          Node no-preload-165275 status is now: NodeReady
	  Normal  Starting                 61s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)    kubelet          Node no-preload-165275 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)    kubelet          Node no-preload-165275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)    kubelet          Node no-preload-165275 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                  node-controller  Node no-preload-165275 event: Registered Node no-preload-165275 in Controller
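
	[Annotation] In the Allocated resources table, the percentages are integer fractions of node capacity: 850m of CPU requested against 8 cores (8000m) displays as 10%, and the 100m limit as 1%. The same arithmetic, using integer division to match the truncated output above (an assumption based on how the values render):

	    package main

	    import "fmt"

	    func main() {
	        const capacityMilli = 8 * 1000 // 8 CPUs, from the node's Capacity section
	        for _, req := range []int{850, 100} {
	            fmt.Printf("%dm / %dm = %d%%\n", req, capacityMilli, req*100/capacityMilli)
	        }
	        // Prints: 850m / 8000m = 10%  and  100m / 8000m = 1%
	    }
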
	
	
	==> dmesg <==
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
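
	[Annotation] The dmesg section is the kernel's martian-source logging: packets claiming a 127.0.0.1 source arrived on eth0, which is impossible for a loopback address, so the kernel flags them, with the reporting interval doubling from about 1s up to 16s as the offsets show. Whether these lines appear at all is governed by the log_martians sysctl; a sketch that reads the current setting:

	    package main

	    import (
	        "fmt"
	        "os"
	        "strings"
	    )

	    func main() {
	        // net.ipv4.conf.all.log_martians controls the "martian source" lines above.
	        b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/log_martians")
	        if err != nil {
	            fmt.Println("cannot read sysctl:", err)
	            return
	        }
	        fmt.Println("log_martians =", strings.TrimSpace(string(b)))
	    }
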
	
	
	==> etcd [3e2c583673b99348bde570e54f1913de407877ce7969439954326ffcf6f4fc31] <==
	{"level":"warn","ts":"2025-10-18T15:05:41.393485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.401940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.410780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.430842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.436810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.444863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.469163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.486527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.497157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.503971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.515170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.523659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.532659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.540601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.549522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.557779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.566539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.574778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.583205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.597574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.605580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.614017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:05:41.680175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39960","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T15:05:55.502064Z","caller":"traceutil/trace.go:172","msg":"trace[1869035317] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"127.523053ms","start":"2025-10-18T15:05:55.374514Z","end":"2025-10-18T15:05:55.502037Z","steps":["trace[1869035317] 'process raft request'  (duration: 114.388633ms)","trace[1869035317] 'compare'  (duration: 12.953597ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T15:05:55.725104Z","caller":"traceutil/trace.go:172","msg":"trace[253110695] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"143.721998ms","start":"2025-10-18T15:05:55.581348Z","end":"2025-10-18T15:05:55.725070Z","steps":["trace[253110695] 'process raft request'  (duration: 125.670218ms)","trace[253110695] 'compare'  (duration: 17.946928ms)"],"step_count":2}
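
	[Annotation] The two trace[...] entries at the end of the etcd log are its slow-request tracing: etcd times each step of a request and prints the trace when the total crosses its warning threshold (here 127ms and 143ms transactions, dominated by the 'process raft request' step). A sketch of the same pattern, with durations copied from the first trace and a budget assumed to be on the order of etcd's default:

	    package main

	    import (
	        "fmt"
	        "time"
	    )

	    type step struct {
	        name string
	        d    time.Duration
	    }

	    func main() {
	        // Step durations taken from the first trace entry above.
	        steps := []step{
	            {"process raft request", 114388633 * time.Nanosecond},
	            {"compare", 12953597 * time.Nanosecond},
	        }
	        var total time.Duration
	        for _, s := range steps {
	            total += s.d
	        }
	        const budget = 100 * time.Millisecond // assumed threshold for this sketch
	        if total > budget {
	            fmt.Printf("trace: transaction took %v:\n", total)
	            for _, s := range steps {
	                fmt.Printf("  %-22s %v\n", s.name, s.d)
	            }
	        }
	    }
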
	
	
	==> kernel <==
	 15:06:40 up  2:49,  0 user,  load average: 2.88, 2.80, 1.95
	Linux no-preload-165275 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a61bf08741c2071de9d41f7d9a959c9d0202f13a22c5d7343ac7bb3c3b93e5e2] <==
	I1018 15:05:43.216289       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 15:05:43.216585       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 15:05:43.216748       1 main.go:148] setting mtu 1500 for CNI 
	I1018 15:05:43.216771       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 15:05:43.216795       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T15:05:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 15:05:43.511886       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 15:05:43.614322       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 15:05:43.614346       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 15:05:43.614558       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 15:05:44.015225       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 15:05:44.015256       1 metrics.go:72] Registering metrics
	I1018 15:05:44.015314       1 controller.go:711] "Syncing nftables rules"
	I1018 15:05:53.427880       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 15:05:53.427983       1 main.go:301] handling current node
	I1018 15:06:03.430402       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 15:06:03.430456       1 main.go:301] handling current node
	I1018 15:06:13.426076       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 15:06:13.426107       1 main.go:301] handling current node
	I1018 15:06:23.426109       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 15:06:23.426141       1 main.go:301] handling current node
	I1018 15:06:33.426955       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 15:06:33.427008       1 main.go:301] handling current node
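
	[Annotation] kindnet's main loop is visible in the timestamps: every ten seconds it handles the full set of nodes and their IPs, and on this single-node cluster it only ever processes the current node. The reconcile-on-a-ticker shape, as a sketch (handleNode is a hypothetical stand-in for the per-node sync):

	    package main

	    import (
	        "fmt"
	        "time"
	    )

	    // handleNode stands in for kindnet's per-node route/nftables sync.
	    func handleNode(ips map[string]struct{}) {
	        fmt.Println("Handling node with IPs:", ips)
	    }

	    func main() {
	        ips := map[string]struct{}{"192.168.85.2": {}} // node IP from the log
	        ticker := time.NewTicker(10 * time.Second)     // matches the 10s cadence above
	        defer ticker.Stop()
	        for i := 0; i < 3; i++ { // bounded so the sketch terminates
	            <-ticker.C
	            handleNode(ips)
	        }
	    }
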
	
	
	==> kube-apiserver [ce5891388244aaa439d0521f1c59f74520a5be8cfe55bae6fec434a5125ea972] <==
	I1018 15:05:42.296235       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 15:05:42.296330       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 15:05:42.300156       1 aggregator.go:171] initial CRD sync complete...
	I1018 15:05:42.300173       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 15:05:42.300181       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 15:05:42.300187       1 cache.go:39] Caches are synced for autoregister controller
	I1018 15:05:42.296651       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 15:05:42.296667       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 15:05:42.300377       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 15:05:42.296293       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 15:05:42.311961       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 15:05:42.350204       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 15:05:42.350777       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:05:42.381850       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 15:05:42.553258       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 15:05:42.691316       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 15:05:42.730209       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 15:05:42.753454       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:05:42.762022       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:05:42.812181       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.171.236"}
	I1018 15:05:42.827996       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.214.94"}
	I1018 15:05:43.197383       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:05:45.609210       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 15:05:45.705249       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 15:05:45.756054       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d37cf270acf4cdb482c3d7fdb5fa2e8ecdf544a1b1172db005a424e0b482c119] <==
	I1018 15:05:45.192307       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 15:05:45.194587       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 15:05:45.196838       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 15:05:45.196955       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 15:05:45.197065       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-165275"
	I1018 15:05:45.197213       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 15:05:45.198155       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 15:05:45.200489       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 15:05:45.202780       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 15:05:45.202812       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 15:05:45.202893       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 15:05:45.203025       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 15:05:45.202941       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 15:05:45.203900       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 15:05:45.204011       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 15:05:45.204320       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 15:05:45.206424       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 15:05:45.207652       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 15:05:45.208734       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 15:05:45.208806       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 15:05:45.208854       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 15:05:45.208861       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 15:05:45.208867       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 15:05:45.208880       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 15:05:45.232108       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [d8643b50024f13026b83ef70e0a7a12d1d5fc9a309e6bcd49fa11236a78579ff] <==
	I1018 15:05:43.015232       1 server_linux.go:53] "Using iptables proxy"
	I1018 15:05:43.078262       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 15:05:43.178830       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 15:05:43.178871       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 15:05:43.179008       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 15:05:43.199405       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 15:05:43.199472       1 server_linux.go:132] "Using iptables Proxier"
	I1018 15:05:43.206159       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 15:05:43.206663       1 server.go:527] "Version info" version="v1.34.1"
	I1018 15:05:43.206701       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:05:43.211863       1 config.go:200] "Starting service config controller"
	I1018 15:05:43.212082       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 15:05:43.212255       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 15:05:43.212337       1 config.go:106] "Starting endpoint slice config controller"
	I1018 15:05:43.212407       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 15:05:43.212012       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 15:05:43.212306       1 config.go:309] "Starting node config controller"
	I1018 15:05:43.212966       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 15:05:43.213026       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 15:05:43.312479       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 15:05:43.312768       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 15:05:43.312768       1 shared_informer.go:356] "Caches are synced" controller="service config"
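
	[Annotation] The kube-proxy startup shows client-go's informer handshake: each config controller logs "Waiting for caches to sync" and proceeds only after "Caches are synced". A minimal sketch of the standard pattern behind those lines, assuming it runs inside a pod with in-cluster credentials:

	    package main

	    import (
	        "fmt"
	        "time"

	        "k8s.io/client-go/informers"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/rest"
	        "k8s.io/client-go/tools/cache"
	    )

	    func main() {
	        cfg, err := rest.InClusterConfig() // assumes in-cluster execution
	        if err != nil {
	            panic(err)
	        }
	        client := kubernetes.NewForConfigOrDie(cfg)

	        factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	        svcInformer := factory.Core().V1().Services().Informer()

	        stop := make(chan struct{})
	        defer close(stop)
	        factory.Start(stop)

	        fmt.Println("Waiting for caches to sync")
	        if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
	            panic("cache sync failed")
	        }
	        fmt.Println("Caches are synced")
	    }
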
	
	
	==> kube-scheduler [c1d28d4d24c3ece98b690ada9bd56a5d7ebdd925b9e2320e8f7d9f1b62f77b34] <==
	I1018 15:05:40.716722       1 serving.go:386] Generated self-signed cert in-memory
	W1018 15:05:42.228926       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 15:05:42.231089       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 15:05:42.231192       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 15:05:42.231227       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 15:05:42.290065       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 15:05:42.290161       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:05:42.293215       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 15:05:42.293341       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:05:42.296743       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:05:42.293371       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 15:05:42.396927       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 15:05:49 no-preload-165275 kubelet[702]: I1018 15:05:49.637616     702 scope.go:117] "RemoveContainer" containerID="609c9df9c9c317c944ad79690ed3684ddc5526e73335deadb8ac7e06f6710b54"
	Oct 18 15:05:49 no-preload-165275 kubelet[702]: E1018 15:05:49.637778     702 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vd468_kubernetes-dashboard(d57570e6-aad1-4c12-a379-6f8f0e8c277c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468" podUID="d57570e6-aad1-4c12-a379-6f8f0e8c277c"
	Oct 18 15:05:50 no-preload-165275 kubelet[702]: I1018 15:05:50.646759     702 scope.go:117] "RemoveContainer" containerID="609c9df9c9c317c944ad79690ed3684ddc5526e73335deadb8ac7e06f6710b54"
	Oct 18 15:05:50 no-preload-165275 kubelet[702]: E1018 15:05:50.649948     702 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vd468_kubernetes-dashboard(d57570e6-aad1-4c12-a379-6f8f0e8c277c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468" podUID="d57570e6-aad1-4c12-a379-6f8f0e8c277c"
	Oct 18 15:05:51 no-preload-165275 kubelet[702]: I1018 15:05:51.649410     702 scope.go:117] "RemoveContainer" containerID="609c9df9c9c317c944ad79690ed3684ddc5526e73335deadb8ac7e06f6710b54"
	Oct 18 15:05:51 no-preload-165275 kubelet[702]: E1018 15:05:51.649644     702 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vd468_kubernetes-dashboard(d57570e6-aad1-4c12-a379-6f8f0e8c277c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468" podUID="d57570e6-aad1-4c12-a379-6f8f0e8c277c"
	Oct 18 15:05:51 no-preload-165275 kubelet[702]: I1018 15:05:51.955934     702 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 15:05:55 no-preload-165275 kubelet[702]: I1018 15:05:55.365367     702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4l599" podStartSLOduration=3.695280231 podStartE2EDuration="10.365342384s" podCreationTimestamp="2025-10-18 15:05:45 +0000 UTC" firstStartedPulling="2025-10-18 15:05:46.178371919 +0000 UTC m=+6.712601151" lastFinishedPulling="2025-10-18 15:05:52.848434076 +0000 UTC m=+13.382663304" observedRunningTime="2025-10-18 15:05:53.718286731 +0000 UTC m=+14.252515973" watchObservedRunningTime="2025-10-18 15:05:55.365342384 +0000 UTC m=+15.899571626"
	Oct 18 15:06:03 no-preload-165275 kubelet[702]: I1018 15:06:03.572127     702 scope.go:117] "RemoveContainer" containerID="609c9df9c9c317c944ad79690ed3684ddc5526e73335deadb8ac7e06f6710b54"
	Oct 18 15:06:03 no-preload-165275 kubelet[702]: I1018 15:06:03.685510     702 scope.go:117] "RemoveContainer" containerID="609c9df9c9c317c944ad79690ed3684ddc5526e73335deadb8ac7e06f6710b54"
	Oct 18 15:06:03 no-preload-165275 kubelet[702]: I1018 15:06:03.685740     702 scope.go:117] "RemoveContainer" containerID="1ea0d2250d60235dd8ab7570ad302ef35617e4a9b3276fb327f6980e483073ab"
	Oct 18 15:06:03 no-preload-165275 kubelet[702]: E1018 15:06:03.685976     702 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vd468_kubernetes-dashboard(d57570e6-aad1-4c12-a379-6f8f0e8c277c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468" podUID="d57570e6-aad1-4c12-a379-6f8f0e8c277c"
	Oct 18 15:06:10 no-preload-165275 kubelet[702]: I1018 15:06:10.163582     702 scope.go:117] "RemoveContainer" containerID="1ea0d2250d60235dd8ab7570ad302ef35617e4a9b3276fb327f6980e483073ab"
	Oct 18 15:06:10 no-preload-165275 kubelet[702]: E1018 15:06:10.163765     702 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vd468_kubernetes-dashboard(d57570e6-aad1-4c12-a379-6f8f0e8c277c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468" podUID="d57570e6-aad1-4c12-a379-6f8f0e8c277c"
	Oct 18 15:06:13 no-preload-165275 kubelet[702]: I1018 15:06:13.714658     702 scope.go:117] "RemoveContainer" containerID="f24643699519eede5987d2db64babc61ecbb2bc1fbe89e7d24e540599e9fda2c"
	Oct 18 15:06:24 no-preload-165275 kubelet[702]: I1018 15:06:24.572333     702 scope.go:117] "RemoveContainer" containerID="1ea0d2250d60235dd8ab7570ad302ef35617e4a9b3276fb327f6980e483073ab"
	Oct 18 15:06:24 no-preload-165275 kubelet[702]: I1018 15:06:24.748372     702 scope.go:117] "RemoveContainer" containerID="1ea0d2250d60235dd8ab7570ad302ef35617e4a9b3276fb327f6980e483073ab"
	Oct 18 15:06:24 no-preload-165275 kubelet[702]: I1018 15:06:24.749043     702 scope.go:117] "RemoveContainer" containerID="75a93b11eb42f4e75eb477490277c486c91ad8b2a0193107139e8ee97e00ca8c"
	Oct 18 15:06:24 no-preload-165275 kubelet[702]: E1018 15:06:24.749285     702 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vd468_kubernetes-dashboard(d57570e6-aad1-4c12-a379-6f8f0e8c277c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468" podUID="d57570e6-aad1-4c12-a379-6f8f0e8c277c"
	Oct 18 15:06:30 no-preload-165275 kubelet[702]: I1018 15:06:30.163368     702 scope.go:117] "RemoveContainer" containerID="75a93b11eb42f4e75eb477490277c486c91ad8b2a0193107139e8ee97e00ca8c"
	Oct 18 15:06:30 no-preload-165275 kubelet[702]: E1018 15:06:30.163589     702 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vd468_kubernetes-dashboard(d57570e6-aad1-4c12-a379-6f8f0e8c277c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vd468" podUID="d57570e6-aad1-4c12-a379-6f8f0e8c277c"
	Oct 18 15:06:35 no-preload-165275 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 15:06:35 no-preload-165275 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 15:06:35 no-preload-165275 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 15:06:35 no-preload-165275 systemd[1]: kubelet.service: Consumed 1.776s CPU time.
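
	[Annotation] The kubelet entries trace CrashLoopBackOff's exponential delay directly: back-off 10s, then 20s, then 40s for the same container, doubling per failed restart until the kubelet's five-minute cap. The progression, as arithmetic:

	    package main

	    import (
	        "fmt"
	        "time"
	    )

	    func main() {
	        const (
	            initial = 10 * time.Second // kubelet's initial crash-loop delay
	            ceiling = 5 * time.Minute  // cap, after which the delay stays flat
	        )
	        d := initial
	        for i := 1; i <= 7; i++ {
	            fmt.Printf("restart %d: back-off %v\n", i, d)
	            d *= 2
	            if d > ceiling {
	                d = ceiling
	            }
	        }
	        // Prints 10s, 20s, 40s, ... matching the back-off values in the log above.
	    }
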
	
	
	==> kubernetes-dashboard [40d37635759ffb9d9f2cb9a03f0e608336ed376a7646906d2e3102badf4b2204] <==
	2025/10/18 15:05:52 Starting overwatch
	2025/10/18 15:05:52 Using namespace: kubernetes-dashboard
	2025/10/18 15:05:52 Using in-cluster config to connect to apiserver
	2025/10/18 15:05:52 Using secret token for csrf signing
	2025/10/18 15:05:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 15:05:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 15:05:52 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 15:05:52 Generating JWE encryption key
	2025/10/18 15:05:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 15:05:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 15:05:53 Initializing JWE encryption key from synchronized object
	2025/10/18 15:05:53 Creating in-cluster Sidecar client
	2025/10/18 15:05:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 15:05:53 Serving insecurely on HTTP port: 9090
	2025/10/18 15:06:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [bd99f521b104d52b1eeda1acab34e9e643be51bbb7965e0beed26ac85243b990] <==
	I1018 15:06:13.772968       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 15:06:13.781902       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 15:06:13.781967       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 15:06:13.784444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:17.239820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:21.500414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:25.099281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:28.153434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:31.176691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:31.182023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 15:06:31.182194       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 15:06:31.182319       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"297bdaca-635d-490e-89a8-cdf06fe2f03a", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-165275_fb90670f-c83b-45c4-ac94-af6eddde5e55 became leader
	I1018 15:06:31.182368       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-165275_fb90670f-c83b-45c4-ac94-af6eddde5e55!
	W1018 15:06:31.184852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:31.187901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 15:06:31.283277       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-165275_fb90670f-c83b-45c4-ac94-af6eddde5e55!
	W1018 15:06:33.191621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:33.198091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:35.202334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:35.208172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:37.211418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:37.215475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:39.218597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:39.223296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f24643699519eede5987d2db64babc61ecbb2bc1fbe89e7d24e540599e9fda2c] <==
	I1018 15:05:42.994160       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 15:06:12.997445       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
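The storage-provisioner crash above is its startup probe: it asks the API server for its version through the in-cluster service IP (10.96.0.1:443) with a 32s timeout and exits fatally when that fails. A minimal sketch of an equivalent probe, assuming client-go (illustrative only, not the provisioner's actual source):

	package main
	
	import (
		"fmt"
		"log"
		"time"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		// In-cluster config resolves the API server via the kubernetes service
		// (10.96.0.1:443 in this cluster) and the pod's service account token.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("in-cluster config: %v", err)
		}
		cfg.Timeout = 32 * time.Second // matches the ?timeout=32s in the log above
	
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("clientset: %v", err)
		}
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			// An i/o timeout here usually means the service network (kube-proxy
			// or the CNI) was not ready yet, as in the f2464... container above.
			log.Fatalf("error getting server version: %v", err)
		}
		fmt.Println("apiserver version:", v.GitVersion)
	}

The replacement storage-provisioner container (bd99f...) ran the same probe after the node network settled, which is why it got past initialization and on to leader election.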
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-165275 -n no-preload-165275
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-165275 -n no-preload-165275: exit status 2 (388.469966ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-165275 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.97s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-489104 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-489104 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (267.471066ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:06:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
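MK_ADDON_ENABLE_PAUSED is raised because minikube refuses to change addons while the runtime reports paused containers, and that check shells out to "sudo runc list -f json". Here runc exits 1 because its state root /run/runc does not exist on the node. A hedged sketch of that kind of check, assuming runc's JSON list output carries "id" and "status" fields (a sketch, not minikube's actual implementation; a crio node may need "runc --root <dir> list"):

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// listPausedRunc returns the ids of containers runc reports as paused.
	func listPausedRunc() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// "open /run/runc: no such file or directory" means the state root
			// is absent; arguably that should be treated as "nothing paused"
			// rather than a hard failure, which is the condition this test trips.
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var containers []struct {
			ID     string `json:"id"`
			Status string `json:"status"`
		}
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range containers {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}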
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-489104 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-489104 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-489104 describe deploy/metrics-server -n kube-system: exit status 1 (63.23271ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-489104 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
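The expectation string is derived from the flags passed to "addons enable": --registries=MetricsServer=fake.domain prefixes the image given by --images=MetricsServer=registry.k8s.io/echoserver:1.4, so the deployment should reference fake.domain/registry.k8s.io/echoserver:1.4. A hypothetical helper showing how that expected reference composes (names are illustrative, not the test's code):

	// expectedImage joins a registry override with an image reference the way
	// the --registries/--images flags combine, e.g.
	// expectedImage("fake.domain", "registry.k8s.io/echoserver:1.4")
	//   == "fake.domain/registry.k8s.io/echoserver:1.4"
	func expectedImage(registry, image string) string {
		if registry == "" {
			return image
		}
		return registry + "/" + image
	}

The check never ran against a real deployment here: the enable command itself failed with MK_ADDON_ENABLE_PAUSED, so deploy/metrics-server was never created and the describe above returned NotFound.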
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-489104
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-489104:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58",
	        "Created": "2025-10-18T15:05:55.975362915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347847,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T15:05:56.027177612Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58/hostname",
	        "HostsPath": "/var/lib/docker/containers/028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58/hosts",
	        "LogPath": "/var/lib/docker/containers/028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58/028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58-json.log",
	        "Name": "/default-k8s-diff-port-489104",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-489104:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-489104",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58",
	                "LowerDir": "/var/lib/docker/overlay2/d41b3b6686e5c33cdc02253cbb7729e5677ff70aca495d84cb9a64aec8ce6497-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d41b3b6686e5c33cdc02253cbb7729e5677ff70aca495d84cb9a64aec8ce6497/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d41b3b6686e5c33cdc02253cbb7729e5677ff70aca495d84cb9a64aec8ce6497/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d41b3b6686e5c33cdc02253cbb7729e5677ff70aca495d84cb9a64aec8ce6497/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-489104",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-489104/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-489104",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-489104",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-489104",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ca3d0762ab4e05ef5e936b3edbc4e8a3a2c68fe0ca47fcac19b33c0d4706b5f5",
	            "SandboxKey": "/var/run/docker/netns/ca3d0762ab4e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-489104": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:ca:b8:23:70:ae",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc1ae438e1a0053de9cf1d93573ce1c4498bc18884eb76fa43ba91a693a5bdd8",
	                    "EndpointID": "5b3cd34286c5a9024e8f7e7a21a33d684580e327ea100a07771484eaa5ad79b9",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-489104",
	                        "028760fe9fe5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
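In the inspect output above, each PortBindings entry requests HostIp 127.0.0.1 with an empty HostPort, so Docker assigns ephemeral host ports; the actual assignments appear under NetworkSettings.Ports (e.g. 8444/tcp, the non-default apiserver port, published on 127.0.0.1:33081). A small sketch of recovering such a mapping after the fact (assumes the docker CLI is on PATH; the container name comes from this test):

	package main
	
	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Equivalent one-liner: docker port default-k8s-diff-port-489104 8444/tcp
		out, err := exec.Command("docker", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}`,
			"default-k8s-diff-port-489104").Output()
		if err != nil {
			log.Fatalf("docker inspect: %v", err)
		}
		fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}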
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-489104 -n default-k8s-diff-port-489104
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-489104 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-489104 logs -n 25: (1.158128632s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-948537 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-948537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:04 UTC │
	│ start   │ -p old-k8s-version-948537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:04 UTC │ 18 Oct 25 15:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-165275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ stop    │ -p no-preload-165275 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-833162    │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-833162    │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ delete  │ -p kubernetes-upgrade-833162                                                                                                                                                                                                                  │ kubernetes-upgrade-833162    │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ addons  │ enable dashboard -p no-preload-165275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p embed-certs-775590 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p no-preload-165275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:06 UTC │
	│ image   │ old-k8s-version-948537 image list --format=json                                                                                                                                                                                               │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ pause   │ -p old-k8s-version-948537 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ delete  │ -p old-k8s-version-948537                                                                                                                                                                                                                     │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ delete  │ -p old-k8s-version-948537                                                                                                                                                                                                                     │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ delete  │ -p disable-driver-mounts-677415                                                                                                                                                                                                               │ disable-driver-mounts-677415 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p default-k8s-diff-port-489104 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p cert-expiration-327346 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-327346       │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ delete  │ -p cert-expiration-327346                                                                                                                                                                                                                     │ cert-expiration-327346       │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p newest-cni-741831 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-775590 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ stop    │ -p embed-certs-775590 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ image   │ no-preload-165275 image list --format=json                                                                                                                                                                                                    │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ pause   │ -p no-preload-165275 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-489104 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:06:18
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:06:18.990992  352142 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:06:18.991107  352142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:06:18.991114  352142 out.go:374] Setting ErrFile to fd 2...
	I1018 15:06:18.991124  352142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:06:18.991316  352142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:06:18.991810  352142 out.go:368] Setting JSON to false
	I1018 15:06:18.993170  352142 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10130,"bootTime":1760789849,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:06:18.993269  352142 start.go:141] virtualization: kvm guest
	I1018 15:06:18.995348  352142 out.go:179] * [newest-cni-741831] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:06:18.996606  352142 notify.go:220] Checking for updates...
	I1018 15:06:18.996634  352142 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:06:18.997879  352142 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:06:18.999081  352142 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:06:19.000329  352142 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 15:06:19.001580  352142 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:06:19.002773  352142 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:06:19.004542  352142 config.go:182] Loaded profile config "default-k8s-diff-port-489104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:19.004705  352142 config.go:182] Loaded profile config "embed-certs-775590": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:19.004931  352142 config.go:182] Loaded profile config "no-preload-165275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:19.005076  352142 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:06:19.029798  352142 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:06:19.029968  352142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:06:19.087262  352142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 15:06:19.076975606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:06:19.087375  352142 docker.go:318] overlay module found
	I1018 15:06:19.089283  352142 out.go:179] * Using the docker driver based on user configuration
	I1018 15:06:16.235796  347067 addons.go:514] duration metric: took 508.874239ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 15:06:16.564682  347067 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-489104" context rescaled to 1 replicas
	W1018 15:06:18.064585  347067 node_ready.go:57] node "default-k8s-diff-port-489104" has "Ready":"False" status (will retry)
	I1018 15:06:19.090309  352142 start.go:305] selected driver: docker
	I1018 15:06:19.090324  352142 start.go:925] validating driver "docker" against <nil>
	I1018 15:06:19.090335  352142 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:06:19.090980  352142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:06:19.147933  352142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 15:06:19.138241028 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:06:19.148135  352142 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1018 15:06:19.148176  352142 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1018 15:06:19.148433  352142 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 15:06:19.150539  352142 out.go:179] * Using Docker driver with root privileges
	I1018 15:06:19.151779  352142 cni.go:84] Creating CNI manager for ""
	I1018 15:06:19.151848  352142 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:06:19.151872  352142 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 15:06:19.151980  352142 start.go:349] cluster config:
	{Name:newest-cni-741831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-741831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:06:19.153327  352142 out.go:179] * Starting "newest-cni-741831" primary control-plane node in "newest-cni-741831" cluster
	I1018 15:06:19.154334  352142 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:06:19.155556  352142 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:06:19.156744  352142 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:06:19.156787  352142 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 15:06:19.156813  352142 cache.go:58] Caching tarball of preloaded images
	I1018 15:06:19.156868  352142 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 15:06:19.156962  352142 preload.go:233] Found /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 15:06:19.156978  352142 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 15:06:19.157137  352142 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/config.json ...
	I1018 15:06:19.157171  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/config.json: {Name:mkd13aa7acfbed253b9ba5a36cce3dfa1f0aceee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:19.176402  352142 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:06:19.176421  352142 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 15:06:19.176437  352142 cache.go:232] Successfully downloaded all kic artifacts
	I1018 15:06:19.176475  352142 start.go:360] acquireMachinesLock for newest-cni-741831: {Name:mk05ea0bcc583fa4b3d237c8091a165605e0fbe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:06:19.176588  352142 start.go:364] duration metric: took 94.483µs to acquireMachinesLock for "newest-cni-741831"
	I1018 15:06:19.176621  352142 start.go:93] Provisioning new machine with config: &{Name:newest-cni-741831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-741831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:06:19.176710  352142 start.go:125] createHost starting for "" (driver="docker")
	W1018 15:06:18.006003  340627 pod_ready.go:104] pod "coredns-66bc5c9577-cmgb8" is not "Ready", error: <nil>
	W1018 15:06:20.007350  340627 pod_ready.go:104] pod "coredns-66bc5c9577-cmgb8" is not "Ready", error: <nil>
	I1018 15:06:22.007480  340627 pod_ready.go:94] pod "coredns-66bc5c9577-cmgb8" is "Ready"
	I1018 15:06:22.007513  340627 pod_ready.go:86] duration metric: took 38.506906838s for pod "coredns-66bc5c9577-cmgb8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.010793  340627 pod_ready.go:83] waiting for pod "etcd-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.015586  340627 pod_ready.go:94] pod "etcd-no-preload-165275" is "Ready"
	I1018 15:06:22.015617  340627 pod_ready.go:86] duration metric: took 4.797501ms for pod "etcd-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.018019  340627 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.022342  340627 pod_ready.go:94] pod "kube-apiserver-no-preload-165275" is "Ready"
	I1018 15:06:22.022370  340627 pod_ready.go:86] duration metric: took 4.328879ms for pod "kube-apiserver-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.024547  340627 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.205070  340627 pod_ready.go:94] pod "kube-controller-manager-no-preload-165275" is "Ready"
	I1018 15:06:22.205105  340627 pod_ready.go:86] duration metric: took 180.535874ms for pod "kube-controller-manager-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.405341  340627 pod_ready.go:83] waiting for pod "kube-proxy-84fhl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:22.804708  340627 pod_ready.go:94] pod "kube-proxy-84fhl" is "Ready"
	I1018 15:06:22.804737  340627 pod_ready.go:86] duration metric: took 399.364412ms for pod "kube-proxy-84fhl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:23.009439  340627 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:23.405543  340627 pod_ready.go:94] pod "kube-scheduler-no-preload-165275" is "Ready"
	I1018 15:06:23.405574  340627 pod_ready.go:86] duration metric: took 396.107038ms for pod "kube-scheduler-no-preload-165275" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:23.405589  340627 pod_ready.go:40] duration metric: took 39.908960633s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:06:23.451163  340627 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:06:23.547653  340627 out.go:179] * Done! kubectl is now configured to use "no-preload-165275" cluster and "default" namespace by default
	I1018 15:06:19.178580  352142 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 15:06:19.178837  352142 start.go:159] libmachine.API.Create for "newest-cni-741831" (driver="docker")
	I1018 15:06:19.178873  352142 client.go:168] LocalClient.Create starting
	I1018 15:06:19.179005  352142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem
	I1018 15:06:19.179061  352142 main.go:141] libmachine: Decoding PEM data...
	I1018 15:06:19.179076  352142 main.go:141] libmachine: Parsing certificate...
	I1018 15:06:19.179132  352142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem
	I1018 15:06:19.179155  352142 main.go:141] libmachine: Decoding PEM data...
	I1018 15:06:19.179164  352142 main.go:141] libmachine: Parsing certificate...
	I1018 15:06:19.179501  352142 cli_runner.go:164] Run: docker network inspect newest-cni-741831 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 15:06:19.196543  352142 cli_runner.go:211] docker network inspect newest-cni-741831 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 15:06:19.196640  352142 network_create.go:284] running [docker network inspect newest-cni-741831] to gather additional debugging logs...
	I1018 15:06:19.196663  352142 cli_runner.go:164] Run: docker network inspect newest-cni-741831
	W1018 15:06:19.213085  352142 cli_runner.go:211] docker network inspect newest-cni-741831 returned with exit code 1
	I1018 15:06:19.213136  352142 network_create.go:287] error running [docker network inspect newest-cni-741831]: docker network inspect newest-cni-741831: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-741831 not found
	I1018 15:06:19.213172  352142 network_create.go:289] output of [docker network inspect newest-cni-741831]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-741831 not found
	
	** /stderr **
	I1018 15:06:19.213347  352142 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:06:19.230587  352142 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-67ded9675d49 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:eb:89:76:0f:a6} reservation:<nil>}
	I1018 15:06:19.231147  352142 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4b365c92bc46 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:db:b6:83:36:75} reservation:<nil>}
	I1018 15:06:19.231748  352142 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ab6063c7cdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:eb:32:cc:ab:b4} reservation:<nil>}
	I1018 15:06:19.232375  352142 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4b571e6f85a5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:35:91:99:08:5b} reservation:<nil>}
	I1018 15:06:19.232993  352142 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-2decf6b0e9a2 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:32:85:60:59:11:56} reservation:<nil>}
	I1018 15:06:19.233747  352142 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f38de0}
	I1018 15:06:19.233775  352142 network_create.go:124] attempt to create docker network newest-cni-741831 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1018 15:06:19.233823  352142 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-741831 newest-cni-741831
	I1018 15:06:19.295382  352142 network_create.go:108] docker network newest-cni-741831 192.168.94.0/24 created
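The scan above walks candidate /24 subnets upward from 192.168.49.0 in steps of 9 (49, 58, 67, 76, 85, 94, ...) until one is not claimed by an existing docker bridge. A minimal Go sketch of that selection, with a plain map standing in as a hypothetical substitute for the docker network inspect lookups:

// subnet_scan.go - a minimal sketch of the free-subnet scan seen above.
// The real logic lives in minikube's network package; the taken map is a
// hypothetical stand-in for the "docker network inspect" interface checks.
package main

import "fmt"

func firstFreeSubnet(taken map[string]bool) string {
	// advance the third octet in steps of 9, as the log shows
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.94.0/24, matching this run
}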
	I1018 15:06:19.295424  352142 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-741831" container
	I1018 15:06:19.295490  352142 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 15:06:19.312794  352142 cli_runner.go:164] Run: docker volume create newest-cni-741831 --label name.minikube.sigs.k8s.io=newest-cni-741831 --label created_by.minikube.sigs.k8s.io=true
	I1018 15:06:19.332326  352142 oci.go:103] Successfully created a docker volume newest-cni-741831
	I1018 15:06:19.332413  352142 cli_runner.go:164] Run: docker run --rm --name newest-cni-741831-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-741831 --entrypoint /usr/bin/test -v newest-cni-741831:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 15:06:19.734788  352142 oci.go:107] Successfully prepared a docker volume newest-cni-741831
	I1018 15:06:19.734843  352142 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:06:19.734868  352142 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 15:06:19.734956  352142 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-741831:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1018 15:06:20.564874  347067 node_ready.go:57] node "default-k8s-diff-port-489104" has "Ready":"False" status (will retry)
	W1018 15:06:22.565092  347067 node_ready.go:57] node "default-k8s-diff-port-489104" has "Ready":"False" status (will retry)
	I1018 15:06:24.339197  352142 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-741831:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.604197145s)
	I1018 15:06:24.339229  352142 kic.go:203] duration metric: took 4.604355206s to extract preloaded images to volume ...
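The extraction step above seeds the named volume without touching the host: a throwaway container mounts the preload tarball read-only alongside the volume and untars into it with lz4. A sketch of the same invocation via os/exec, with all paths and the image reference as placeholders rather than the exact values from this run:

// A sketch of the volume-seeding trick above, assuming docker is on PATH.
package main

import (
	"fmt"
	"os/exec"
)

// extractPreload runs a disposable container whose entrypoint is tar,
// mounting the tarball read-only and the named volume as the target.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	return cmd.Run()
}

func main() {
	// placeholder arguments, not the exact ones from this run
	err := extractPreload("/tmp/preloaded.tar.lz4", "newest-cni-741831", "kicbase-image:tag")
	fmt.Println(err)
}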
	W1018 15:06:24.339333  352142 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 15:06:24.339364  352142 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 15:06:24.339401  352142 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 15:06:24.406366  352142 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-741831 --name newest-cni-741831 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-741831 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-741831 --network newest-cni-741831 --ip 192.168.94.2 --volume newest-cni-741831:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 15:06:24.727314  352142 cli_runner.go:164] Run: docker container inspect newest-cni-741831 --format={{.State.Running}}
	I1018 15:06:24.750170  352142 cli_runner.go:164] Run: docker container inspect newest-cni-741831 --format={{.State.Status}}
	I1018 15:06:24.774109  352142 cli_runner.go:164] Run: docker exec newest-cni-741831 stat /var/lib/dpkg/alternatives/iptables
	I1018 15:06:24.826218  352142 oci.go:144] the created container "newest-cni-741831" has a running status.
	I1018 15:06:24.826247  352142 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa...
	I1018 15:06:25.591975  352142 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 15:06:25.618152  352142 cli_runner.go:164] Run: docker container inspect newest-cni-741831 --format={{.State.Status}}
	I1018 15:06:25.635630  352142 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 15:06:25.635652  352142 kic_runner.go:114] Args: [docker exec --privileged newest-cni-741831 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 15:06:25.683939  352142 cli_runner.go:164] Run: docker container inspect newest-cni-741831 --format={{.State.Status}}
	I1018 15:06:25.701188  352142 machine.go:93] provisionDockerMachine start ...
	I1018 15:06:25.701290  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:25.719680  352142 main.go:141] libmachine: Using SSH client type: native
	I1018 15:06:25.720029  352142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1018 15:06:25.720060  352142 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:06:25.854071  352142 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-741831
	
	I1018 15:06:25.854106  352142 ubuntu.go:182] provisioning hostname "newest-cni-741831"
	I1018 15:06:25.854160  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:25.872062  352142 main.go:141] libmachine: Using SSH client type: native
	I1018 15:06:25.872341  352142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1018 15:06:25.872365  352142 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-741831 && echo "newest-cni-741831" | sudo tee /etc/hostname
	I1018 15:06:26.015459  352142 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-741831
	
	I1018 15:06:26.015545  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:26.033766  352142 main.go:141] libmachine: Using SSH client type: native
	I1018 15:06:26.034053  352142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1018 15:06:26.034076  352142 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-741831' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-741831/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-741831' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:06:26.171352  352142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 15:06:26.171386  352142 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 15:06:26.171426  352142 ubuntu.go:190] setting up certificates
	I1018 15:06:26.171441  352142 provision.go:84] configureAuth start
	I1018 15:06:26.171503  352142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-741831
	I1018 15:06:26.190241  352142 provision.go:143] copyHostCerts
	I1018 15:06:26.190312  352142 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem, removing ...
	I1018 15:06:26.190325  352142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:06:26.190406  352142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 15:06:26.190521  352142 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem, removing ...
	I1018 15:06:26.190537  352142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:06:26.190580  352142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 15:06:26.190670  352142 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem, removing ...
	I1018 15:06:26.190681  352142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:06:26.190722  352142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 15:06:26.190798  352142 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.newest-cni-741831 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-741831]
	I1018 15:06:26.528284  352142 provision.go:177] copyRemoteCerts
	I1018 15:06:26.528341  352142 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:06:26.528375  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:26.546905  352142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:26.644596  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:06:26.665034  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 15:06:26.683543  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 15:06:26.701141  352142 provision.go:87] duration metric: took 529.670696ms to configureAuth
	I1018 15:06:26.701174  352142 ubuntu.go:206] setting minikube options for container-runtime
	I1018 15:06:26.701364  352142 config.go:182] Loaded profile config "newest-cni-741831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:26.701496  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:26.719555  352142 main.go:141] libmachine: Using SSH client type: native
	I1018 15:06:26.719765  352142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1018 15:06:26.719782  352142 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:06:26.970657  352142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 15:06:26.970682  352142 machine.go:96] duration metric: took 1.269467705s to provisionDockerMachine
	I1018 15:06:26.970692  352142 client.go:171] duration metric: took 7.791810529s to LocalClient.Create
	I1018 15:06:26.970712  352142 start.go:167] duration metric: took 7.791877225s to libmachine.API.Create "newest-cni-741831"
	I1018 15:06:26.970719  352142 start.go:293] postStartSetup for "newest-cni-741831" (driver="docker")
	I1018 15:06:26.970729  352142 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:06:26.970806  352142 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:06:26.970861  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:26.988335  352142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:27.087221  352142 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:06:27.090783  352142 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 15:06:27.090809  352142 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 15:06:27.090827  352142 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/addons for local assets ...
	I1018 15:06:27.090877  352142 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/files for local assets ...
	I1018 15:06:27.090972  352142 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem -> 931872.pem in /etc/ssl/certs
	I1018 15:06:27.091056  352142 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:06:27.098707  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:06:27.118867  352142 start.go:296] duration metric: took 148.132063ms for postStartSetup
	I1018 15:06:27.119258  352142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-741831
	I1018 15:06:27.138075  352142 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/config.json ...
	I1018 15:06:27.138321  352142 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 15:06:27.138366  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:27.155272  352142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:27.249460  352142 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 15:06:27.254563  352142 start.go:128] duration metric: took 8.077835013s to createHost
	I1018 15:06:27.254590  352142 start.go:83] releasing machines lock for "newest-cni-741831", held for 8.077985561s
	I1018 15:06:27.254660  352142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-741831
	I1018 15:06:27.273539  352142 ssh_runner.go:195] Run: cat /version.json
	I1018 15:06:27.273588  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:27.273628  352142 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:06:27.273693  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:27.291712  352142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:27.292133  352142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:27.438032  352142 ssh_runner.go:195] Run: systemctl --version
	I1018 15:06:27.444732  352142 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:06:27.480771  352142 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:06:27.485774  352142 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:06:27.485841  352142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:06:27.512064  352142 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 15:06:27.512089  352142 start.go:495] detecting cgroup driver to use...
	I1018 15:06:27.512126  352142 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 15:06:27.512175  352142 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:06:27.528665  352142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:06:27.541203  352142 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:06:27.541255  352142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:06:27.557700  352142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:06:27.577069  352142 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:06:27.661864  352142 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:06:27.751078  352142 docker.go:234] disabling docker service ...
	I1018 15:06:27.751149  352142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:06:27.771123  352142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:06:27.787019  352142 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:06:27.884416  352142 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:06:27.973822  352142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 15:06:27.986604  352142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:06:28.000991  352142 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 15:06:28.001058  352142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.011828  352142 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 15:06:28.011896  352142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.020931  352142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.030085  352142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.039092  352142 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:06:28.047412  352142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.055961  352142 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:06:28.069830  352142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
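The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf line by line: pin the pause image, switch the cgroup manager to systemd, re-add conmon_cgroup, and splice net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A minimal Go sketch of the same line-level rewrite for the first two keys, assuming one "key = value" per line exactly as the sed patterns do:

// A sketch of the in-place config rewrite performed by the sed commands above.
package main

import (
	"fmt"
	"regexp"
)

func rewriteCrioConf(conf []byte) []byte {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	conf = cgroup.ReplaceAll(conf, []byte(`cgroup_manager = "systemd"`))
	return conf
}

func main() {
	in := []byte("pause_image = \"old\"\ncgroup_manager = \"cgroupfs\"\n")
	fmt.Print(string(rewriteCrioConf(in)))
}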
	I1018 15:06:28.079271  352142 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:06:28.087557  352142 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 15:06:28.095726  352142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:06:28.204871  352142 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 15:06:28.308340  352142 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:06:28.308400  352142 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:06:28.312652  352142 start.go:563] Will wait 60s for crictl version
	I1018 15:06:28.312706  352142 ssh_runner.go:195] Run: which crictl
	I1018 15:06:28.316479  352142 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 15:06:28.342582  352142 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 15:06:28.342759  352142 ssh_runner.go:195] Run: crio --version
	I1018 15:06:28.371661  352142 ssh_runner.go:195] Run: crio --version
	I1018 15:06:28.404027  352142 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 15:06:28.405208  352142 cli_runner.go:164] Run: docker network inspect newest-cni-741831 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:06:28.422412  352142 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 15:06:28.426696  352142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
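The one-liner above makes the hosts update idempotent: drop any existing host.minikube.internal line, append the fresh mapping, write to a temp file, and sudo cp it back over /etc/hosts. A minimal Go sketch of the upsert as a pure string transformation (the temp-file-and-copy step is elided):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes any line ending in "\t<name>" and appends the
// fresh "ip\tname" mapping, mirroring the grep -v / echo pipeline above.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	fmt.Print(upsertHostsEntry("127.0.0.1\tlocalhost", "192.168.94.1", "host.minikube.internal"))
}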
	I1018 15:06:28.438922  352142 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 15:06:28.440159  352142 kubeadm.go:883] updating cluster {Name:newest-cni-741831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-741831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 15:06:28.440298  352142 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:06:28.440369  352142 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:06:28.471339  352142 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:06:28.471358  352142 crio.go:433] Images already preloaded, skipping extraction
	I1018 15:06:28.471399  352142 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:06:28.498054  352142 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:06:28.498077  352142 cache_images.go:85] Images are preloaded, skipping loading
	I1018 15:06:28.498085  352142 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1018 15:06:28.498165  352142 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-741831 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-741831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 15:06:28.498226  352142 ssh_runner.go:195] Run: crio config
	I1018 15:06:28.544284  352142 cni.go:84] Creating CNI manager for ""
	I1018 15:06:28.544310  352142 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:06:28.544334  352142 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 15:06:28.544364  352142 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-741831 NodeName:newest-cni-741831 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 15:06:28.544529  352142 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-741831"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 15:06:28.544591  352142 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 15:06:28.552919  352142 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 15:06:28.552987  352142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 15:06:28.560695  352142 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 15:06:28.573650  352142 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 15:06:28.589169  352142 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1018 15:06:28.602324  352142 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 15:06:28.606123  352142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:06:28.616292  352142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:06:28.702657  352142 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:06:28.728867  352142 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831 for IP: 192.168.94.2
	I1018 15:06:28.728898  352142 certs.go:195] generating shared ca certs ...
	I1018 15:06:28.728944  352142 certs.go:227] acquiring lock for ca certs: {Name:mk6d3b4e44856f498157352ffe8bb89d2c7a3998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:28.729163  352142 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key
	I1018 15:06:28.729240  352142 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key
	I1018 15:06:28.729254  352142 certs.go:257] generating profile certs ...
	I1018 15:06:28.729414  352142 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/client.key
	I1018 15:06:28.729451  352142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/client.crt with IP's: []
	I1018 15:06:28.792470  352142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/client.crt ...
	I1018 15:06:28.792500  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/client.crt: {Name:mke8e96a052b8eb8b398b73425f8e5ee1007513d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:28.792716  352142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/client.key ...
	I1018 15:06:28.792733  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/client.key: {Name:mk9c5cc06cccf0052c525e1e52278d7f0300c686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:28.792854  352142 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.key.5b00fbe4
	I1018 15:06:28.792878  352142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.crt.5b00fbe4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	W1018 15:06:24.565716  347067 node_ready.go:57] node "default-k8s-diff-port-489104" has "Ready":"False" status (will retry)
	W1018 15:06:27.064596  347067 node_ready.go:57] node "default-k8s-diff-port-489104" has "Ready":"False" status (will retry)
	I1018 15:06:28.065074  347067 node_ready.go:49] node "default-k8s-diff-port-489104" is "Ready"
	I1018 15:06:28.065102  347067 node_ready.go:38] duration metric: took 12.003457865s for node "default-k8s-diff-port-489104" to be "Ready" ...
	I1018 15:06:28.065119  347067 api_server.go:52] waiting for apiserver process to appear ...
	I1018 15:06:28.065157  347067 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:06:28.076701  347067 api_server.go:72] duration metric: took 12.349786258s to wait for apiserver process to appear ...
	I1018 15:06:28.076733  347067 api_server.go:88] waiting for apiserver healthz status ...
	I1018 15:06:28.076752  347067 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1018 15:06:28.081593  347067 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1018 15:06:28.082688  347067 api_server.go:141] control plane version: v1.34.1
	I1018 15:06:28.082715  347067 api_server.go:131] duration metric: took 5.974362ms to wait for apiserver health ...
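The health check above is a plain HTTPS GET against /healthz that expects a 200 with body "ok". A self-contained Go sketch of the probe; the real client trusts the cluster CA, and TLS verification is skipped here only to keep the sketch standalone:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy reports whether the /healthz endpoint returns 200 "ok".
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// sketch only: the real probe verifies against the cluster CA
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == 200 && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.103.2:8444/healthz")
	fmt.Println(ok, err)
}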
	I1018 15:06:28.082726  347067 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:06:28.086013  347067 system_pods.go:59] 8 kube-system pods found
	I1018 15:06:28.086058  347067 system_pods.go:61] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:06:28.086070  347067 system_pods.go:61] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running
	I1018 15:06:28.086083  347067 system_pods.go:61] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:06:28.086088  347067 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running
	I1018 15:06:28.086097  347067 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running
	I1018 15:06:28.086103  347067 system_pods.go:61] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:06:28.086110  347067 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running
	I1018 15:06:28.086118  347067 system_pods.go:61] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:06:28.086130  347067 system_pods.go:74] duration metric: took 3.396495ms to wait for pod list to return data ...
	I1018 15:06:28.086142  347067 default_sa.go:34] waiting for default service account to be created ...
	I1018 15:06:28.088698  347067 default_sa.go:45] found service account: "default"
	I1018 15:06:28.088719  347067 default_sa.go:55] duration metric: took 2.569918ms for default service account to be created ...
	I1018 15:06:28.088729  347067 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 15:06:28.091501  347067 system_pods.go:86] 8 kube-system pods found
	I1018 15:06:28.091528  347067 system_pods.go:89] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:06:28.091534  347067 system_pods.go:89] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running
	I1018 15:06:28.091540  347067 system_pods.go:89] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:06:28.091543  347067 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running
	I1018 15:06:28.091547  347067 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running
	I1018 15:06:28.091550  347067 system_pods.go:89] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:06:28.091554  347067 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running
	I1018 15:06:28.091558  347067 system_pods.go:89] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:06:28.091576  347067 retry.go:31] will retry after 228.914741ms: missing components: kube-dns
	I1018 15:06:28.325222  347067 system_pods.go:86] 8 kube-system pods found
	I1018 15:06:28.325259  347067 system_pods.go:89] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:06:28.325267  347067 system_pods.go:89] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running
	I1018 15:06:28.325275  347067 system_pods.go:89] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:06:28.325281  347067 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running
	I1018 15:06:28.325287  347067 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running
	I1018 15:06:28.325292  347067 system_pods.go:89] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:06:28.325297  347067 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running
	I1018 15:06:28.325304  347067 system_pods.go:89] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:06:28.325327  347067 retry.go:31] will retry after 353.361454ms: missing components: kube-dns
	I1018 15:06:28.682887  347067 system_pods.go:86] 8 kube-system pods found
	I1018 15:06:28.682948  347067 system_pods.go:89] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:06:28.682958  347067 system_pods.go:89] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running
	I1018 15:06:28.682966  347067 system_pods.go:89] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:06:28.682974  347067 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running
	I1018 15:06:28.682981  347067 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running
	I1018 15:06:28.682991  347067 system_pods.go:89] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:06:28.682997  347067 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running
	I1018 15:06:28.683008  347067 system_pods.go:89] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:06:28.683029  347067 retry.go:31] will retry after 298.181886ms: missing components: kube-dns
	I1018 15:06:28.986254  347067 system_pods.go:86] 8 kube-system pods found
	I1018 15:06:28.986282  347067 system_pods.go:89] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Running
	I1018 15:06:28.986288  347067 system_pods.go:89] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running
	I1018 15:06:28.986292  347067 system_pods.go:89] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:06:28.986296  347067 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running
	I1018 15:06:28.986299  347067 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running
	I1018 15:06:28.986302  347067 system_pods.go:89] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:06:28.986305  347067 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running
	I1018 15:06:28.986308  347067 system_pods.go:89] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Running
	I1018 15:06:28.986316  347067 system_pods.go:126] duration metric: took 897.58086ms to wait for k8s-apps to be running ...
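The "will retry after ..." lines above come from a poll-until-ready loop: check the kube-system pod list, and if anything is missing, sleep a short jittered interval and try again until a deadline. A minimal sketch of that pattern, with a hypothetical check callback standing in for the pod listing done by system_pods.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// pollUntil retries check with jittered waits until it succeeds or the
// deadline elapses; the 228ms/353ms/298ms waits in the log fit this range.
func pollUntil(deadline time.Duration, check func() error) error {
	start := time.Now()
	for time.Since(start) < deadline {
		if err := check(); err == nil {
			return nil
		}
		time.Sleep(time.Duration(200+rand.Intn(200)) * time.Millisecond)
	}
	return errors.New("condition not met before deadline")
}

func main() {
	tries := 0
	err := pollUntil(5*time.Second, func() error {
		tries++
		if tries < 4 { // simulate coredns not yet Running
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println(err, "after", tries, "tries")
}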
	I1018 15:06:28.986323  347067 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 15:06:28.986366  347067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:06:28.999817  347067 system_svc.go:56] duration metric: took 13.480567ms WaitForService to wait for kubelet
	I1018 15:06:28.999843  347067 kubeadm.go:586] duration metric: took 13.272933961s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:06:28.999865  347067 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:06:29.003008  347067 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 15:06:29.003035  347067 node_conditions.go:123] node cpu capacity is 8
	I1018 15:06:29.003050  347067 node_conditions.go:105] duration metric: took 3.181093ms to run NodePressure ...
	I1018 15:06:29.003062  347067 start.go:241] waiting for startup goroutines ...
	I1018 15:06:29.003069  347067 start.go:246] waiting for cluster config update ...
	I1018 15:06:29.003089  347067 start.go:255] writing updated cluster config ...
	I1018 15:06:29.003370  347067 ssh_runner.go:195] Run: rm -f paused
	I1018 15:06:29.007398  347067 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:06:29.011225  347067 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dtjgd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.015433  347067 pod_ready.go:94] pod "coredns-66bc5c9577-dtjgd" is "Ready"
	I1018 15:06:29.015452  347067 pod_ready.go:86] duration metric: took 4.205314ms for pod "coredns-66bc5c9577-dtjgd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.017638  347067 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.021381  347067 pod_ready.go:94] pod "etcd-default-k8s-diff-port-489104" is "Ready"
	I1018 15:06:29.021399  347067 pod_ready.go:86] duration metric: took 3.738445ms for pod "etcd-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.023308  347067 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.026567  347067 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-489104" is "Ready"
	I1018 15:06:29.026587  347067 pod_ready.go:86] duration metric: took 3.257885ms for pod "kube-apiserver-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.028296  347067 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.063010  352142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.crt.5b00fbe4 ...
	I1018 15:06:29.063038  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.crt.5b00fbe4: {Name:mk3d0668ddae7d28b699df3536f8e4c4c7dbf460 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:29.063212  352142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.key.5b00fbe4 ...
	I1018 15:06:29.063226  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.key.5b00fbe4: {Name:mkd60891ad06419625ec1cb1227353159cfb6546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:29.063304  352142 certs.go:382] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.crt.5b00fbe4 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.crt
	I1018 15:06:29.063375  352142 certs.go:386] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.key.5b00fbe4 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.key
	I1018 15:06:29.063429  352142 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.key
	I1018 15:06:29.063450  352142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.crt with IP's: []
	I1018 15:06:29.291547  352142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.crt ...
	I1018 15:06:29.291575  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.crt: {Name:mk9053fa1d59e516145d535ccf928a7a4620007b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:29.291747  352142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.key ...
	I1018 15:06:29.291760  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.key: {Name:mk718982611c021d2ca690df47a58e465ee8a410 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
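The profile certs above are generated locally and signed by the minikube CA, with the apiserver cert carrying the IP SANs listed at 15:06:28.792878 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.94.2). A minimal crypto/x509 sketch of CA-signed cert generation with IP SANs; error handling is elided, and the real code also PEM-encodes the results and writes them under file locks:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// self-signed CA (the minikube CA is reused across profiles)
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// leaf cert signed by the CA, with the apiserver's IP SANs
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, leafKey)
	fmt.Println("leaf cert DER bytes:", len(leafDER))
}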
	I1018 15:06:29.291962  352142 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem (1338 bytes)
	W1018 15:06:29.292002  352142 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187_empty.pem, impossibly tiny 0 bytes
	I1018 15:06:29.292011  352142 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 15:06:29.292032  352142 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem (1082 bytes)
	I1018 15:06:29.292057  352142 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem (1123 bytes)
	I1018 15:06:29.292078  352142 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem (1675 bytes)
	I1018 15:06:29.292125  352142 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:06:29.292759  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 15:06:29.311389  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 15:06:29.329272  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 15:06:29.346654  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 15:06:29.364002  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 15:06:29.382237  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 15:06:29.400409  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 15:06:29.420478  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/newest-cni-741831/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 15:06:29.440286  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 15:06:29.460529  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem --> /usr/share/ca-certificates/93187.pem (1338 bytes)
	I1018 15:06:29.478328  352142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /usr/share/ca-certificates/931872.pem (1708 bytes)
	I1018 15:06:29.495696  352142 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 15:06:29.508533  352142 ssh_runner.go:195] Run: openssl version
	I1018 15:06:29.514752  352142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 15:06:29.523282  352142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:06:29.527456  352142 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:06:29.527507  352142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:06:29.562619  352142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 15:06:29.573830  352142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93187.pem && ln -fs /usr/share/ca-certificates/93187.pem /etc/ssl/certs/93187.pem"
	I1018 15:06:29.582909  352142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93187.pem
	I1018 15:06:29.587243  352142 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 14:26 /usr/share/ca-certificates/93187.pem
	I1018 15:06:29.587318  352142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93187.pem
	I1018 15:06:29.624088  352142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93187.pem /etc/ssl/certs/51391683.0"
	I1018 15:06:29.633601  352142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/931872.pem && ln -fs /usr/share/ca-certificates/931872.pem /etc/ssl/certs/931872.pem"
	I1018 15:06:29.642526  352142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/931872.pem
	I1018 15:06:29.646524  352142 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 14:26 /usr/share/ca-certificates/931872.pem
	I1018 15:06:29.646586  352142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/931872.pem
	I1018 15:06:29.681559  352142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/931872.pem /etc/ssl/certs/3ec20f2e.0"
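The symlink steps above install each CA the way OpenSSL's directory lookup expects: a link named "<subject-hash>.0" pointing at the PEM file, where the hash comes from openssl x509 -hash (b5213941.0 for minikubeCA.pem in this run). A sketch that shells out for the hash and creates the link, assuming openssl is on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the "<hash>.0" symlink OpenSSL uses to find a
// CA in a certs directory, mirroring the ln -fs commands above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}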
	I1018 15:06:29.690980  352142 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 15:06:29.694862  352142 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 15:06:29.694929  352142 kubeadm.go:400] StartCluster: {Name:newest-cni-741831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-741831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:06:29.695023  352142 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 15:06:29.695110  352142 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 15:06:29.723568  352142 cri.go:89] found id: ""
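The empty ID list from that query is what tells minikube there is nothing to reuse. On the node, the listing amounts to this single invocation, where no output means no kube-system containers exist yet:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system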
	I1018 15:06:29.723638  352142 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 15:06:29.731906  352142 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 15:06:29.740249  352142 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 15:06:29.740294  352142 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 15:06:29.748230  352142 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 15:06:29.748252  352142 kubeadm.go:157] found existing configuration files:
	
	I1018 15:06:29.748291  352142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 15:06:29.756351  352142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 15:06:29.756404  352142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 15:06:29.764376  352142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 15:06:29.772207  352142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 15:06:29.772260  352142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 15:06:29.779898  352142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 15:06:29.787823  352142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 15:06:29.787891  352142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 15:06:29.795735  352142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 15:06:29.803573  352142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 15:06:29.803624  352142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
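The four grep-then-remove exchanges above are one pattern applied per kubeconfig file: keep the file only if it already points at the expected control-plane endpoint. A sketch of the equivalent shell loop, with the paths and endpoint from this run:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"   # file missing or stale: clear it
	done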
	I1018 15:06:29.811038  352142 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 15:06:29.850187  352142 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 15:06:29.850272  352142 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 15:06:29.871064  352142 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 15:06:29.871172  352142 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 15:06:29.871247  352142 kubeadm.go:318] OS: Linux
	I1018 15:06:29.871372  352142 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 15:06:29.871447  352142 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 15:06:29.871518  352142 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 15:06:29.871595  352142 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 15:06:29.871671  352142 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 15:06:29.871761  352142 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 15:06:29.871839  352142 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 15:06:29.871898  352142 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 15:06:29.935613  352142 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 15:06:29.935785  352142 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 15:06:29.935942  352142 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
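The preflight hint above can be acted on directly; pre-pulling the control-plane images with the same config file this run passes to kubeadm would look like:

	sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml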
	I1018 15:06:29.943662  352142 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 15:06:29.412888  347067 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-489104" is "Ready"
	I1018 15:06:29.412928  347067 pod_ready.go:86] duration metric: took 384.611351ms for pod "kube-controller-manager-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:29.612847  347067 pod_ready.go:83] waiting for pod "kube-proxy-7wbfs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:30.011983  347067 pod_ready.go:94] pod "kube-proxy-7wbfs" is "Ready"
	I1018 15:06:30.012012  347067 pod_ready.go:86] duration metric: took 399.134641ms for pod "kube-proxy-7wbfs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:30.211949  347067 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:30.612493  347067 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-489104" is "Ready"
	I1018 15:06:30.612525  347067 pod_ready.go:86] duration metric: took 400.540545ms for pod "kube-scheduler-default-k8s-diff-port-489104" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:06:30.612538  347067 pod_ready.go:40] duration metric: took 1.60510698s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:06:30.661514  347067 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:06:30.663459  347067 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-489104" cluster and "default" namespace by default
	I1018 15:06:29.948047  352142 out.go:252]   - Generating certificates and keys ...
	I1018 15:06:29.948147  352142 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 15:06:29.948229  352142 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 15:06:30.250963  352142 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 15:06:30.366731  352142 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 15:06:30.535222  352142 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 15:06:30.853257  352142 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 15:06:31.046320  352142 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 15:06:31.046555  352142 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-741831] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 15:06:31.171804  352142 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 15:06:31.172019  352142 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-741831] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 15:06:32.275618  352142 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 15:06:33.097457  352142 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 15:06:33.197652  352142 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 15:06:33.197773  352142 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 15:06:33.308356  352142 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 15:06:33.547102  352142 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 15:06:33.677173  352142 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 15:06:34.208214  352142 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 15:06:34.302781  352142 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 15:06:34.303476  352142 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 15:06:34.308455  352142 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 15:06:34.309825  352142 out.go:252]   - Booting up control plane ...
	I1018 15:06:34.309970  352142 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 15:06:34.310108  352142 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 15:06:34.311099  352142 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 15:06:34.325184  352142 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 15:06:34.325348  352142 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 15:06:34.331880  352142 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 15:06:34.332079  352142 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 15:06:34.332147  352142 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 15:06:34.435219  352142 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 15:06:34.435411  352142 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 15:06:35.436962  352142 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001878752s
	I1018 15:06:35.440026  352142 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 15:06:35.440152  352142 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1018 15:06:35.440300  352142 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 15:06:35.440434  352142 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 15:06:36.457261  352142 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.017092353s
	I1018 15:06:37.533342  352142 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.093263869s
	I1018 15:06:39.441806  352142 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001746156s
	I1018 15:06:39.454673  352142 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 15:06:39.467537  352142 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 15:06:39.478601  352142 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 15:06:39.479038  352142 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-741831 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 15:06:39.487586  352142 kubeadm.go:318] [bootstrap-token] Using token: 02v8kq.2hfgbddxyy4lzjzq
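Bootstrap tokens like the one above are short-lived credentials used for node joins; they can be inspected or minted on the control plane with kubeadm's token subcommands (illustrative, not part of this run):

	sudo kubeadm token list
	sudo kubeadm token create --print-join-command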
	
	
	==> CRI-O <==
	Oct 18 15:06:28 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:28.168117652Z" level=info msg="Starting container: b3ee48cd8c01164c66a9ce191025e9ac2de0d90624ac0304cc83960c3b40cd1a" id=0f51f8cc-cda9-4eda-b94b-56153a74d909 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:06:28 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:28.170252833Z" level=info msg="Started container" PID=1849 containerID=b3ee48cd8c01164c66a9ce191025e9ac2de0d90624ac0304cc83960c3b40cd1a description=kube-system/coredns-66bc5c9577-dtjgd/coredns id=0f51f8cc-cda9-4eda-b94b-56153a74d909 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6e1ce00289057ac3698f195e4d79697b99b06b038798e6fb722c35e176315cef
	Oct 18 15:06:31 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:31.153120525Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b2d192af-395d-400c-9550-fc4d409c7956 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:06:31 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:31.153229194Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:31 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:31.158605755Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f3fca62f705b4d52fea6723d5a87f922a660d8a7a747e55e08ade8c8b0537b28 UID:2ca10c14-7bb9-43cd-9a37-bb2e16dc4b95 NetNS:/var/run/netns/cde753a9-0f85-4a8e-852b-a422b71a448a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00031a5b8}] Aliases:map[]}"
	Oct 18 15:06:31 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:31.158642182Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 15:06:31 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:31.16961338Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f3fca62f705b4d52fea6723d5a87f922a660d8a7a747e55e08ade8c8b0537b28 UID:2ca10c14-7bb9-43cd-9a37-bb2e16dc4b95 NetNS:/var/run/netns/cde753a9-0f85-4a8e-852b-a422b71a448a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00031a5b8}] Aliases:map[]}"
	Oct 18 15:06:31 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:31.169759121Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 15:06:31 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:31.170893782Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 15:06:31 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:31.171901093Z" level=info msg="Ran pod sandbox f3fca62f705b4d52fea6723d5a87f922a660d8a7a747e55e08ade8c8b0537b28 with infra container: default/busybox/POD" id=b2d192af-395d-400c-9550-fc4d409c7956 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:06:31 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:31.173218279Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=06961f31-bc70-4bd0-b4a9-5cc27e7b19d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:31 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:31.173358609Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=06961f31-bc70-4bd0-b4a9-5cc27e7b19d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:31 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:31.173407738Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=06961f31-bc70-4bd0-b4a9-5cc27e7b19d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:31 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:31.174134218Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=44448b6d-36c1-4b81-99cd-21abb580e034 name=/runtime.v1.ImageService/PullImage
	Oct 18 15:06:31 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:31.177668995Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 15:06:33 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:33.208100565Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=44448b6d-36c1-4b81-99cd-21abb580e034 name=/runtime.v1.ImageService/PullImage
	Oct 18 15:06:33 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:33.208935632Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f71a029d-a57b-42e3-9a71-1106c4c9c652 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:33 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:33.210353785Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4756c974-5054-45df-84a7-5c1f9da5738d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:33 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:33.213902916Z" level=info msg="Creating container: default/busybox/busybox" id=b51f5dbf-e028-46a3-b231-53b347f5cc42 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:06:33 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:33.214746735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:33 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:33.218998957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:33 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:33.219533403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:33 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:33.241658966Z" level=info msg="Created container 3293bb16e0451b40ae2ecede9f6736398b23ca5e2d49ef88d315f411467790cf: default/busybox/busybox" id=b51f5dbf-e028-46a3-b231-53b347f5cc42 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:06:33 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:33.242312739Z" level=info msg="Starting container: 3293bb16e0451b40ae2ecede9f6736398b23ca5e2d49ef88d315f411467790cf" id=41bba97a-16b4-4784-91cc-af67453749ef name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:06:33 default-k8s-diff-port-489104 crio[777]: time="2025-10-18T15:06:33.244217444Z" level=info msg="Started container" PID=1924 containerID=3293bb16e0451b40ae2ecede9f6736398b23ca5e2d49ef88d315f411467790cf description=default/busybox/busybox id=41bba97a-16b4-4784-91cc-af67453749ef name=/runtime.v1.RuntimeService/StartContainer sandboxID=f3fca62f705b4d52fea6723d5a87f922a660d8a7a747e55e08ade8c8b0537b28
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	3293bb16e0451       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   f3fca62f705b4       busybox                                                default
	b3ee48cd8c011       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   6e1ce00289057       coredns-66bc5c9577-dtjgd                               kube-system
	cc9c260fc55ab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   0c2abf25116ab       storage-provisioner                                    kube-system
	62610c6f2c0e9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   6e0e6f7e4177e       kube-proxy-7wbfs                                       kube-system
	f74cb2434c9a3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   bf6a981d51712       kindnet-nvnw6                                          kube-system
	a9c3bc6c3a7f2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   6313f0ae64f2b       kube-apiserver-default-k8s-diff-port-489104            kube-system
	acd6379046acc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   ae615dd57280f       etcd-default-k8s-diff-port-489104                      kube-system
	e03a73d5d1b64       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   40554b80ec1dc       kube-controller-manager-default-k8s-diff-port-489104   kube-system
	8a42604e1c7c7       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   16f3c82784135       kube-scheduler-default-k8s-diff-port-489104            kube-system
	
	
	==> coredns [b3ee48cd8c01164c66a9ce191025e9ac2de0d90624ac0304cc83960c3b40cd1a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55632 - 58039 "HINFO IN 396386913973946141.5636930644699978753. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.110962056s
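The single HINFO query with a random numeric name is CoreDNS's loop-plugin startup probe; NXDOMAIN is the healthy answer, confirming no forwarding loop. A comparable probe by hand against this cluster's DNS ClusterIP (10.96.0.10 here; the name is arbitrary):

	dig @10.96.0.10 -t HINFO 1234567890.0987654321. +short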
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-489104
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-489104
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=default-k8s-diff-port-489104
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_06_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:06:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-489104
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:06:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:06:31 +0000   Sat, 18 Oct 2025 15:06:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:06:31 +0000   Sat, 18 Oct 2025 15:06:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:06:31 +0000   Sat, 18 Oct 2025 15:06:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 15:06:31 +0000   Sat, 18 Oct 2025 15:06:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-489104
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                2a8259a7-7ba4-40c3-bcf3-f004f9ae6965
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-dtjgd                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-489104                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-nvnw6                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-489104             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-489104    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-7wbfs                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-489104             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node default-k8s-diff-port-489104 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node default-k8s-diff-port-489104 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node default-k8s-diff-port-489104 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-489104 event: Registered Node default-k8s-diff-port-489104 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-489104 status is now: NodeReady
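Everything in this block is a point-in-time kubectl view of the node; re-querying just the conditions gives a compact health check (illustrative commands against this cluster):

	kubectl describe node default-k8s-diff-port-489104
	kubectl get node default-k8s-diff-port-489104 \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'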
	
	
	==> dmesg <==
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
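The repeated "martian source" entries are the kernel flagging packets whose 127.0.0.1 source is impossible on eth0, commonly a side effect of kube-proxy's route_localnet=1 handling of localhost NodePorts inside the kicbase container; noisy but expected here. Logging is a per-interface sysctl (illustrative):

	sysctl net.ipv4.conf.all.log_martians             # inspect
	sudo sysctl -w net.ipv4.conf.all.log_martians=0   # silence if desired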
	
	
	==> etcd [acd6379046accf9ecc71e351e31e553198faa443f5de3977cfc72269bdfd85ae] <==
	{"level":"warn","ts":"2025-10-18T15:06:07.686769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.695868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.702757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.709652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.715994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.722173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.728834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.735646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.742101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.748283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.755615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.762150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.769548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.775936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.782988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.789482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.795494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.803305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.810355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.818089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.833137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.836557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.842932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.849427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:07.903250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45964","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:06:40 up  2:49,  0 user,  load average: 2.88, 2.80, 1.95
	Linux default-k8s-diff-port-489104 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f74cb2434c9a356dec18b18818aee59d33a10a7901ff91693a1e29ceb3de07cd] <==
	I1018 15:06:17.052434       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 15:06:17.052719       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 15:06:17.052849       1 main.go:148] setting mtu 1500 for CNI 
	I1018 15:06:17.052868       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 15:06:17.052892       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T15:06:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 15:06:17.252382       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 15:06:17.253533       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 15:06:17.253555       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 15:06:17.276904       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 15:06:17.677172       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 15:06:17.677206       1 metrics.go:72] Registering metrics
	I1018 15:06:17.677277       1 controller.go:711] "Syncing nftables rules"
	I1018 15:06:27.252399       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:06:27.252467       1 main.go:301] handling current node
	I1018 15:06:37.255522       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:06:37.255558       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a9c3bc6c3a7f24fc3dab52936b9959045377310b2a714dfb18a94ac480fa5990] <==
	E1018 15:06:08.442244       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1018 15:06:08.469463       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 15:06:08.473968       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:06:08.474140       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 15:06:08.479883       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:06:08.479907       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 15:06:08.645621       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 15:06:09.272117       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 15:06:09.275928       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 15:06:09.275944       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:06:09.744844       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:06:09.781275       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:06:09.878002       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 15:06:09.883955       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1018 15:06:09.885085       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 15:06:09.889089       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 15:06:10.305261       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 15:06:10.725993       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 15:06:10.741610       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 15:06:10.753945       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 15:06:16.009015       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 15:06:16.308889       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:06:16.312941       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:06:16.407170       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1018 15:06:38.975032       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:48392: use of closed network connection
	
	
	==> kube-controller-manager [e03a73d5d1b64add5f24d47559379d0c6b6115badf528f9f3db5a05bcf64a648] <==
	I1018 15:06:15.260962       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 15:06:15.262196       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 15:06:15.265618       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-489104" podCIDRs=["10.244.0.0/24"]
	I1018 15:06:15.288452       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 15:06:15.303298       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 15:06:15.303325       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 15:06:15.303337       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 15:06:15.304353       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 15:06:15.304400       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 15:06:15.304426       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 15:06:15.304539       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 15:06:15.304642       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 15:06:15.304656       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 15:06:15.304696       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 15:06:15.304711       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 15:06:15.304725       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 15:06:15.304771       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 15:06:15.304906       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 15:06:15.305180       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 15:06:15.309747       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 15:06:15.312043       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 15:06:15.318221       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 15:06:15.321409       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 15:06:15.332958       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 15:06:30.256666       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [62610c6f2c0e9981da8f1449a616836c86b8010ec9b370d784985645b7049874] <==
	I1018 15:06:16.831018       1 server_linux.go:53] "Using iptables proxy"
	I1018 15:06:16.881676       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 15:06:16.982299       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 15:06:16.982368       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1018 15:06:16.982490       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 15:06:17.003079       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 15:06:17.003157       1 server_linux.go:132] "Using iptables Proxier"
	I1018 15:06:17.009587       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 15:06:17.010069       1 server.go:527] "Version info" version="v1.34.1"
	I1018 15:06:17.010119       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:06:17.011612       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 15:06:17.011819       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 15:06:17.011858       1 config.go:200] "Starting service config controller"
	I1018 15:06:17.011864       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 15:06:17.011888       1 config.go:106] "Starting endpoint slice config controller"
	I1018 15:06:17.011897       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 15:06:17.011735       1 config.go:309] "Starting node config controller"
	I1018 15:06:17.011936       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 15:06:17.011943       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 15:06:17.112417       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 15:06:17.112457       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 15:06:17.112427       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
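Per the startup lines above, kube-proxy is running in iptables mode with route_localnet enabled for localhost NodePorts. Both facts can be confirmed on the node once the caches sync (illustrative):

	curl -s http://127.0.0.1:10249/proxyMode    # expect: iptables
	sysctl net.ipv4.conf.all.route_localnet     # expect: 1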
	
	
	==> kube-scheduler [8a42604e1c7c7e7bfd99f34fa7f1c04142369d9e671a3eac08911232725fb6c5] <==
	E1018 15:06:08.340201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 15:06:08.340260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 15:06:08.340184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 15:06:08.340305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 15:06:08.340349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 15:06:08.340375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 15:06:08.340379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 15:06:08.340392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 15:06:08.340454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 15:06:08.340465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 15:06:08.340479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 15:06:08.340541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 15:06:08.340563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 15:06:08.340547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 15:06:08.340598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 15:06:08.340613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 15:06:09.184765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 15:06:09.217110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 15:06:09.231317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 15:06:09.315948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 15:06:09.352663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 15:06:09.369968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 15:06:09.494284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 15:06:09.530585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1018 15:06:12.438023       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 15:06:11 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:11.670788    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-489104" podStartSLOduration=1.6707705750000001 podStartE2EDuration="1.670770575s" podCreationTimestamp="2025-10-18 15:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:06:11.67068216 +0000 UTC m=+1.154986768" watchObservedRunningTime="2025-10-18 15:06:11.670770575 +0000 UTC m=+1.155075162"
	Oct 18 15:06:11 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:11.691156    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-489104" podStartSLOduration=1.691131955 podStartE2EDuration="1.691131955s" podCreationTimestamp="2025-10-18 15:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:06:11.680426243 +0000 UTC m=+1.164730851" watchObservedRunningTime="2025-10-18 15:06:11.691131955 +0000 UTC m=+1.175436549"
	Oct 18 15:06:11 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:11.691304    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-489104" podStartSLOduration=1.691295591 podStartE2EDuration="1.691295591s" podCreationTimestamp="2025-10-18 15:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:06:11.691107533 +0000 UTC m=+1.175412131" watchObservedRunningTime="2025-10-18 15:06:11.691295591 +0000 UTC m=+1.175600200"
	Oct 18 15:06:11 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:11.700320    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-489104" podStartSLOduration=1.7002967 podStartE2EDuration="1.7002967s" podCreationTimestamp="2025-10-18 15:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:06:11.70023771 +0000 UTC m=+1.184542320" watchObservedRunningTime="2025-10-18 15:06:11.7002967 +0000 UTC m=+1.184601309"
	Oct 18 15:06:15 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:15.359544    1332 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 15:06:15 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:15.360686    1332 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 15:06:16 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:16.440070    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7345c2df-3019-4a83-96fc-e02f3704703c-cni-cfg\") pod \"kindnet-nvnw6\" (UID: \"7345c2df-3019-4a83-96fc-e02f3704703c\") " pod="kube-system/kindnet-nvnw6"
	Oct 18 15:06:16 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:16.440140    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7345c2df-3019-4a83-96fc-e02f3704703c-lib-modules\") pod \"kindnet-nvnw6\" (UID: \"7345c2df-3019-4a83-96fc-e02f3704703c\") " pod="kube-system/kindnet-nvnw6"
	Oct 18 15:06:16 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:16.440165    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hrn2\" (UniqueName: \"kubernetes.io/projected/7345c2df-3019-4a83-96fc-e02f3704703c-kube-api-access-7hrn2\") pod \"kindnet-nvnw6\" (UID: \"7345c2df-3019-4a83-96fc-e02f3704703c\") " pod="kube-system/kindnet-nvnw6"
	Oct 18 15:06:16 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:16.440194    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7345c2df-3019-4a83-96fc-e02f3704703c-xtables-lock\") pod \"kindnet-nvnw6\" (UID: \"7345c2df-3019-4a83-96fc-e02f3704703c\") " pod="kube-system/kindnet-nvnw6"
	Oct 18 15:06:16 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:16.541518    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fad0f99a-9792-4603-b5d4-fa7c4c309448-kube-proxy\") pod \"kube-proxy-7wbfs\" (UID: \"fad0f99a-9792-4603-b5d4-fa7c4c309448\") " pod="kube-system/kube-proxy-7wbfs"
	Oct 18 15:06:16 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:16.541573    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fad0f99a-9792-4603-b5d4-fa7c4c309448-lib-modules\") pod \"kube-proxy-7wbfs\" (UID: \"fad0f99a-9792-4603-b5d4-fa7c4c309448\") " pod="kube-system/kube-proxy-7wbfs"
	Oct 18 15:06:16 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:16.541623    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxl5k\" (UniqueName: \"kubernetes.io/projected/fad0f99a-9792-4603-b5d4-fa7c4c309448-kube-api-access-wxl5k\") pod \"kube-proxy-7wbfs\" (UID: \"fad0f99a-9792-4603-b5d4-fa7c4c309448\") " pod="kube-system/kube-proxy-7wbfs"
	Oct 18 15:06:16 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:16.541654    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fad0f99a-9792-4603-b5d4-fa7c4c309448-xtables-lock\") pod \"kube-proxy-7wbfs\" (UID: \"fad0f99a-9792-4603-b5d4-fa7c4c309448\") " pod="kube-system/kube-proxy-7wbfs"
	Oct 18 15:06:17 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:17.669843    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7wbfs" podStartSLOduration=1.66981839 podStartE2EDuration="1.66981839s" podCreationTimestamp="2025-10-18 15:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:06:17.669711754 +0000 UTC m=+7.154016362" watchObservedRunningTime="2025-10-18 15:06:17.66981839 +0000 UTC m=+7.154122982"
	Oct 18 15:06:17 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:17.679635    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-nvnw6" podStartSLOduration=1.679615111 podStartE2EDuration="1.679615111s" podCreationTimestamp="2025-10-18 15:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:06:17.679529277 +0000 UTC m=+7.163833885" watchObservedRunningTime="2025-10-18 15:06:17.679615111 +0000 UTC m=+7.163919720"
	Oct 18 15:06:27 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:27.768966    1332 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 15:06:27 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:27.822344    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xbsk\" (UniqueName: \"kubernetes.io/projected/c5abd8b2-0b16-413a-893e-e2d2f9e13f7d-kube-api-access-4xbsk\") pod \"coredns-66bc5c9577-dtjgd\" (UID: \"c5abd8b2-0b16-413a-893e-e2d2f9e13f7d\") " pod="kube-system/coredns-66bc5c9577-dtjgd"
	Oct 18 15:06:27 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:27.822392    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ae9fdc8f-0be0-4641-abed-fbbfb8e6b466-tmp\") pod \"storage-provisioner\" (UID: \"ae9fdc8f-0be0-4641-abed-fbbfb8e6b466\") " pod="kube-system/storage-provisioner"
	Oct 18 15:06:27 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:27.822414    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5784\" (UniqueName: \"kubernetes.io/projected/ae9fdc8f-0be0-4641-abed-fbbfb8e6b466-kube-api-access-d5784\") pod \"storage-provisioner\" (UID: \"ae9fdc8f-0be0-4641-abed-fbbfb8e6b466\") " pod="kube-system/storage-provisioner"
	Oct 18 15:06:27 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:27.822437    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5abd8b2-0b16-413a-893e-e2d2f9e13f7d-config-volume\") pod \"coredns-66bc5c9577-dtjgd\" (UID: \"c5abd8b2-0b16-413a-893e-e2d2f9e13f7d\") " pod="kube-system/coredns-66bc5c9577-dtjgd"
	Oct 18 15:06:28 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:28.702165    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dtjgd" podStartSLOduration=12.702142082 podStartE2EDuration="12.702142082s" podCreationTimestamp="2025-10-18 15:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:06:28.701856598 +0000 UTC m=+18.186161207" watchObservedRunningTime="2025-10-18 15:06:28.702142082 +0000 UTC m=+18.186446690"
	Oct 18 15:06:28 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:28.712282    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.712258141 podStartE2EDuration="12.712258141s" podCreationTimestamp="2025-10-18 15:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:06:28.712187939 +0000 UTC m=+18.196492550" watchObservedRunningTime="2025-10-18 15:06:28.712258141 +0000 UTC m=+18.196562751"
	Oct 18 15:06:30 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:30.941769    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvrf8\" (UniqueName: \"kubernetes.io/projected/2ca10c14-7bb9-43cd-9a37-bb2e16dc4b95-kube-api-access-gvrf8\") pod \"busybox\" (UID: \"2ca10c14-7bb9-43cd-9a37-bb2e16dc4b95\") " pod="default/busybox"
	Oct 18 15:06:33 default-k8s-diff-port-489104 kubelet[1332]: I1018 15:06:33.713257    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.677176032 podStartE2EDuration="3.713234249s" podCreationTimestamp="2025-10-18 15:06:30 +0000 UTC" firstStartedPulling="2025-10-18 15:06:31.173683042 +0000 UTC m=+20.657987650" lastFinishedPulling="2025-10-18 15:06:33.20974128 +0000 UTC m=+22.694045867" observedRunningTime="2025-10-18 15:06:33.71320296 +0000 UTC m=+23.197507566" watchObservedRunningTime="2025-10-18 15:06:33.713234249 +0000 UTC m=+23.197538858"
	
	
	==> storage-provisioner [cc9c260fc55ab444262566f1631533bb4e8b48574aa9fb4623b0e314d21b168b] <==
	I1018 15:06:28.170339       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 15:06:28.178851       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 15:06:28.178900       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 15:06:28.181133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:28.185884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 15:06:28.186064       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 15:06:28.186231       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-489104_c6f42113-596a-427f-804a-a1333bbe08d6!
	I1018 15:06:28.186567       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b77dfb48-26a4-4c5e-9880-c5c307861880", APIVersion:"v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-489104_c6f42113-596a-427f-804a-a1333bbe08d6 became leader
	W1018 15:06:28.189084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:28.196252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 15:06:28.286460       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-489104_c6f42113-596a-427f-804a-a1333bbe08d6!
	W1018 15:06:30.199349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:30.203720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:32.207364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:32.211468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:34.215064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:34.218865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:36.222770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:36.227735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:38.231263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:38.236432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:40.240427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:06:40.245790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-489104 -n default-k8s-diff-port-489104
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-489104 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.39s)
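
Editorial note on the storage-provisioner log above: the repeated "v1 Endpoints is deprecated in v1.33+" warnings come from the provisioner's leader election, which still takes the kube-system/k8s.io-minikube-hostpath lock on a v1 Endpoints object. A minimal sketch of the same election against a coordination/v1 Lease, the replacement client-go recommends, is below; the identity string is made up for illustration and this is not the provisioner's actual code.

	package main

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// A Lease lock in place of the deprecated Endpoints lock seen in the log.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:    client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{
				Identity: "provisioner-demo", // hypothetical identity
			},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:            lock,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			ReleaseOnCancel: true,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// Start the provisioner controller here once leadership is acquired.
				},
				OnStoppedLeading: func() {},
			},
		})
	}

Switching the lock type this way should silence the warnings without changing the election semantics.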

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.68s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-741831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-741831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (264.635974ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:06:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-741831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
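
Editorial note: every MK_ADDON_ENABLE_PAUSED failure in this run bottoms out in the same check. Before enabling an addon, minikube asks the node for paused containers via "sudo runc list -f json", and on these crio nodes that command exits 1 with "open /run/runc: no such file or directory". A rough sketch of such a paused-container check follows, assuming runc's JSON output carries "id" and "status" fields; this is a hypothetical helper for illustration, not minikube's exact implementation.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer mirrors the `runc list -f json` fields that matter here.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // e.g. "running", "paused"
	}

	// listPaused returns the IDs of paused runc containers. When the runc
	// state directory does not exist, runc exits non-zero, which is the
	// error path surfaced in the log above.
	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range containers {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		paused, err := listPaused()
		fmt.Println(paused, err)
	}

Rerunning the failing command by hand (for example via minikube ssh on the affected profile) would show whether /run/runc simply has not been created on the node yet, which would be consistent with the error text.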
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-741831
helpers_test.go:243: (dbg) docker inspect newest-cni-741831:

-- stdout --
	[
	    {
	        "Id": "80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50",
	        "Created": "2025-10-18T15:06:24.424165883Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 353429,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T15:06:24.469758076Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50/hostname",
	        "HostsPath": "/var/lib/docker/containers/80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50/hosts",
	        "LogPath": "/var/lib/docker/containers/80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50/80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50-json.log",
	        "Name": "/newest-cni-741831",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-741831:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-741831",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50",
	                "LowerDir": "/var/lib/docker/overlay2/ccef2c722d5f80b0292930b40c9275b44275a6f6d4de631a2a924b9d5808916e-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ccef2c722d5f80b0292930b40c9275b44275a6f6d4de631a2a924b9d5808916e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ccef2c722d5f80b0292930b40c9275b44275a6f6d4de631a2a924b9d5808916e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ccef2c722d5f80b0292930b40c9275b44275a6f6d4de631a2a924b9d5808916e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-741831",
	                "Source": "/var/lib/docker/volumes/newest-cni-741831/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-741831",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-741831",
	                "name.minikube.sigs.k8s.io": "newest-cni-741831",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e300d1e505992f7a9e2c727113e264993a9f2de54248e4164c9fb607fce47488",
	            "SandboxKey": "/var/run/docker/netns/e300d1e50599",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-741831": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:f4:9a:76:9e:86",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f155453f4f173ba69c6ef6bc9d76a496868feaebcb5b5f9ed955e83061073a43",
	                    "EndpointID": "fd02c1a57a85e2c183f8f2fb600af67d304b901c399c956f1592c8e149dfcf43",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-741831",
	                        "80f647182c95"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-741831 -n newest-cni-741831
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-741831 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-741831 logs -n 25: (1.408820204s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p kubernetes-upgrade-833162                                                                                                                                                                                                                  │ kubernetes-upgrade-833162    │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ addons  │ enable dashboard -p no-preload-165275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p embed-certs-775590 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p no-preload-165275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:06 UTC │
	│ image   │ old-k8s-version-948537 image list --format=json                                                                                                                                                                                               │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ pause   │ -p old-k8s-version-948537 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │                     │
	│ delete  │ -p old-k8s-version-948537                                                                                                                                                                                                                     │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ delete  │ -p old-k8s-version-948537                                                                                                                                                                                                                     │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ delete  │ -p disable-driver-mounts-677415                                                                                                                                                                                                               │ disable-driver-mounts-677415 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p default-k8s-diff-port-489104 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p cert-expiration-327346 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-327346       │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ delete  │ -p cert-expiration-327346                                                                                                                                                                                                                     │ cert-expiration-327346       │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p newest-cni-741831 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-775590 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ stop    │ -p embed-certs-775590 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ image   │ no-preload-165275 image list --format=json                                                                                                                                                                                                    │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ pause   │ -p no-preload-165275 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-489104 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-775590 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ delete  │ -p no-preload-165275                                                                                                                                                                                                                          │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p embed-certs-775590 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-489104 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ delete  │ -p no-preload-165275                                                                                                                                                                                                                          │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p auto-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-741831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:06:44
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:06:44.289735  359679 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:06:44.290047  359679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:06:44.290062  359679 out.go:374] Setting ErrFile to fd 2...
	I1018 15:06:44.290068  359679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:06:44.290265  359679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:06:44.290729  359679 out.go:368] Setting JSON to false
	I1018 15:06:44.291960  359679 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10155,"bootTime":1760789849,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:06:44.292062  359679 start.go:141] virtualization: kvm guest
	I1018 15:06:44.294123  359679 out.go:179] * [auto-034446] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:06:44.295529  359679 notify.go:220] Checking for updates...
	I1018 15:06:44.295599  359679 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:06:44.296944  359679 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:06:44.298335  359679 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:06:44.299577  359679 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 15:06:44.300754  359679 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:06:44.301968  359679 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:06:44.303795  359679 config.go:182] Loaded profile config "default-k8s-diff-port-489104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:44.303939  359679 config.go:182] Loaded profile config "embed-certs-775590": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:44.304046  359679 config.go:182] Loaded profile config "newest-cni-741831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:44.304147  359679 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:06:44.328784  359679 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:06:44.328903  359679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:06:44.401427  359679 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-18 15:06:44.38396711 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:06:44.401586  359679 docker.go:318] overlay module found
	I1018 15:06:44.405108  359679 out.go:179] * Using the docker driver based on user configuration
	I1018 15:06:44.406464  359679 start.go:305] selected driver: docker
	I1018 15:06:44.406494  359679 start.go:925] validating driver "docker" against <nil>
	I1018 15:06:44.406512  359679 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:06:44.410452  359679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:06:44.483368  359679 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-18 15:06:44.473136854 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:06:44.483546  359679 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 15:06:44.483804  359679 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:06:44.485676  359679 out.go:179] * Using Docker driver with root privileges
	I1018 15:06:44.487033  359679 cni.go:84] Creating CNI manager for ""
	I1018 15:06:44.487116  359679 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:06:44.487129  359679 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 15:06:44.487218  359679 start.go:349] cluster config:
	{Name:auto-034446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-034446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:06:44.488731  359679 out.go:179] * Starting "auto-034446" primary control-plane node in "auto-034446" cluster
	I1018 15:06:44.489895  359679 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:06:44.491149  359679 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:06:44.492377  359679 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:06:44.492412  359679 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 15:06:44.492419  359679 cache.go:58] Caching tarball of preloaded images
	I1018 15:06:44.492482  359679 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 15:06:44.492512  359679 preload.go:233] Found /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 15:06:44.492522  359679 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 15:06:44.492645  359679 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/auto-034446/config.json ...
	I1018 15:06:44.492672  359679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/auto-034446/config.json: {Name:mk58679e3ec6af5c345ace798adcefeb1af6f01d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:44.515439  359679 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:06:44.515468  359679 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 15:06:44.515488  359679 cache.go:232] Successfully downloaded all kic artifacts
	I1018 15:06:44.515517  359679 start.go:360] acquireMachinesLock for auto-034446: {Name:mk1c50ccb4aaf0f22be2f7563b64282a41635100 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:06:44.515635  359679 start.go:364] duration metric: took 97.565µs to acquireMachinesLock for "auto-034446"
	I1018 15:06:44.515667  359679 start.go:93] Provisioning new machine with config: &{Name:auto-034446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-034446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:06:44.515763  359679 start.go:125] createHost starting for "" (driver="docker")
	I1018 15:06:44.313203  352142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:06:44.813138  352142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:06:44.887784  352142 kubeadm.go:1113] duration metric: took 3.707321785s to wait for elevateKubeSystemPrivileges
	I1018 15:06:44.887826  352142 kubeadm.go:402] duration metric: took 15.192901285s to StartCluster
	I1018 15:06:44.887848  352142 settings.go:142] acquiring lock: {Name:mk9d9d72167811b3554435e137ed17137bf54a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:44.887941  352142 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:06:44.889144  352142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/kubeconfig: {Name:mkdc6fbc9be8e57447490b30dd5bb0086611876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:06:44.889383  352142 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 15:06:44.889400  352142 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:06:44.889484  352142 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 15:06:44.889572  352142 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-741831"
	I1018 15:06:44.889599  352142 addons.go:69] Setting default-storageclass=true in profile "newest-cni-741831"
	I1018 15:06:44.889608  352142 config.go:182] Loaded profile config "newest-cni-741831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:44.889617  352142 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-741831"
	I1018 15:06:44.889626  352142 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-741831"
	I1018 15:06:44.889668  352142 host.go:66] Checking if "newest-cni-741831" exists ...
	I1018 15:06:44.890070  352142 cli_runner.go:164] Run: docker container inspect newest-cni-741831 --format={{.State.Status}}
	I1018 15:06:44.890274  352142 cli_runner.go:164] Run: docker container inspect newest-cni-741831 --format={{.State.Status}}
	I1018 15:06:44.894512  352142 out.go:179] * Verifying Kubernetes components...
	I1018 15:06:44.895826  352142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:06:44.914987  352142 addons.go:238] Setting addon default-storageclass=true in "newest-cni-741831"
	I1018 15:06:44.915041  352142 host.go:66] Checking if "newest-cni-741831" exists ...
	I1018 15:06:44.915611  352142 cli_runner.go:164] Run: docker container inspect newest-cni-741831 --format={{.State.Status}}
	I1018 15:06:44.917062  352142 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 15:06:44.918494  352142 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 15:06:44.918520  352142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 15:06:44.918579  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:44.939952  352142 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 15:06:44.939980  352142 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 15:06:44.940054  352142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:44.946086  352142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:44.975185  352142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:45.008992  352142 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 15:06:45.062975  352142 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:06:45.076635  352142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 15:06:45.103377  352142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 15:06:45.223487  352142 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1018 15:06:45.224636  352142 api_server.go:52] waiting for apiserver process to appear ...
	I1018 15:06:45.224695  352142 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:06:45.451164  352142 api_server.go:72] duration metric: took 561.72299ms to wait for apiserver process to appear ...
	I1018 15:06:45.451191  352142 api_server.go:88] waiting for apiserver healthz status ...
	I1018 15:06:45.451211  352142 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 15:06:45.456343  352142 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1018 15:06:45.457332  352142 api_server.go:141] control plane version: v1.34.1
	I1018 15:06:45.457369  352142 api_server.go:131] duration metric: took 6.169529ms to wait for apiserver health ...
	I1018 15:06:45.457381  352142 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:06:45.460980  352142 system_pods.go:59] 5 kube-system pods found
	I1018 15:06:45.461020  352142 system_pods.go:61] "etcd-newest-cni-741831" [12f950c8-4dfa-4ccc-83d6-5610731545be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 15:06:45.461035  352142 system_pods.go:61] "kube-apiserver-newest-cni-741831" [046fc171-edaf-4ada-b09f-a1fc0d2baeee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 15:06:45.461047  352142 system_pods.go:61] "kube-controller-manager-newest-cni-741831" [2a2cd49d-4869-4d98-b7a3-2cf8ffacb083] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 15:06:45.461058  352142 system_pods.go:61] "kube-scheduler-newest-cni-741831" [4c40112d-9d56-49a9-9442-be510c5aaf5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 15:06:45.461066  352142 system_pods.go:61] "storage-provisioner" [29182b74-a02c-4f22-9317-75f93297a124] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 15:06:45.461074  352142 system_pods.go:74] duration metric: took 3.684852ms to wait for pod list to return data ...
	I1018 15:06:45.461086  352142 default_sa.go:34] waiting for default service account to be created ...
	I1018 15:06:45.461367  352142 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 15:06:45.462963  352142 addons.go:514] duration metric: took 573.473039ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 15:06:45.463581  352142 default_sa.go:45] found service account: "default"
	I1018 15:06:45.463605  352142 default_sa.go:55] duration metric: took 2.511666ms for default service account to be created ...
	I1018 15:06:45.463619  352142 kubeadm.go:586] duration metric: took 574.18543ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 15:06:45.463641  352142 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:06:45.465951  352142 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 15:06:45.465979  352142 node_conditions.go:123] node cpu capacity is 8
	I1018 15:06:45.465997  352142 node_conditions.go:105] duration metric: took 2.350313ms to run NodePressure ...
	I1018 15:06:45.466012  352142 start.go:241] waiting for startup goroutines ...
	I1018 15:06:45.727309  352142 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-741831" context rescaled to 1 replicas
	I1018 15:06:45.727363  352142 start.go:246] waiting for cluster config update ...
	I1018 15:06:45.727378  352142 start.go:255] writing updated cluster config ...
	I1018 15:06:45.727757  352142 ssh_runner.go:195] Run: rm -f paused
	I1018 15:06:45.797144  352142 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:06:45.798991  352142 out.go:179] * Done! kubectl is now configured to use "newest-cni-741831" cluster and "default" namespace by default
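	The readiness wait logged above (api_server.go:253) is a plain HTTPS GET against /healthz that must return 200 with the body "ok" before startup proceeds. A minimal way to run the same probe by hand, assuming the node IP and port from this run, and using -k because the apiserver presents a cluster-signed certificate:
	# Poll the apiserver health endpoint until it reports "ok" (sketch).
	until curl -ks https://192.168.94.2:8443/healthz | grep -qx ok; do
	  sleep 1   # retry until the control plane reports healthy
	done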
	I1018 15:06:41.657458  358344 out.go:252] * Restarting existing docker container for "embed-certs-775590" ...
	I1018 15:06:41.657567  358344 cli_runner.go:164] Run: docker start embed-certs-775590
	I1018 15:06:41.955995  358344 cli_runner.go:164] Run: docker container inspect embed-certs-775590 --format={{.State.Status}}
	I1018 15:06:41.976945  358344 kic.go:430] container "embed-certs-775590" state is running.
	I1018 15:06:41.977329  358344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-775590
	I1018 15:06:41.996750  358344 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/embed-certs-775590/config.json ...
	I1018 15:06:41.997015  358344 machine.go:93] provisionDockerMachine start ...
	I1018 15:06:41.997080  358344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:06:42.017411  358344 main.go:141] libmachine: Using SSH client type: native
	I1018 15:06:42.017646  358344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1018 15:06:42.017657  358344 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:06:42.018278  358344 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38094->127.0.0.1:33088: read: connection reset by peer
	I1018 15:06:45.184813  358344 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-775590
	
	I1018 15:06:45.186066  358344 ubuntu.go:182] provisioning hostname "embed-certs-775590"
	I1018 15:06:45.186153  358344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:06:45.213356  358344 main.go:141] libmachine: Using SSH client type: native
	I1018 15:06:45.213661  358344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1018 15:06:45.213688  358344 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-775590 && echo "embed-certs-775590" | sudo tee /etc/hostname
	I1018 15:06:45.387964  358344 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-775590
	
	I1018 15:06:45.388076  358344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:06:45.411418  358344 main.go:141] libmachine: Using SSH client type: native
	I1018 15:06:45.411720  358344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1018 15:06:45.411749  358344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-775590' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-775590/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-775590' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:06:45.558429  358344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 15:06:45.558483  358344 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 15:06:45.558520  358344 ubuntu.go:190] setting up certificates
	I1018 15:06:45.558535  358344 provision.go:84] configureAuth start
	I1018 15:06:45.558612  358344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-775590
	I1018 15:06:45.581196  358344 provision.go:143] copyHostCerts
	I1018 15:06:45.581301  358344 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem, removing ...
	I1018 15:06:45.581325  358344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:06:45.581417  358344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 15:06:45.581570  358344 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem, removing ...
	I1018 15:06:45.581586  358344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:06:45.581631  358344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 15:06:45.581726  358344 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem, removing ...
	I1018 15:06:45.581739  358344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:06:45.581778  358344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 15:06:45.581954  358344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.embed-certs-775590 san=[127.0.0.1 192.168.76.2 embed-certs-775590 localhost minikube]
	I1018 15:06:45.931554  358344 provision.go:177] copyRemoteCerts
	I1018 15:06:45.931614  358344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:06:45.931668  358344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:06:45.955342  358344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/embed-certs-775590/id_rsa Username:docker}
	I1018 15:06:46.059785  358344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:06:46.083112  358344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 15:06:46.104018  358344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 15:06:46.125832  358344 provision.go:87] duration metric: took 567.277155ms to configureAuth
	I1018 15:06:46.125866  358344 ubuntu.go:206] setting minikube options for container-runtime
	I1018 15:06:46.126106  358344 config.go:182] Loaded profile config "embed-certs-775590": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:46.126237  358344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:06:46.150450  358344 main.go:141] libmachine: Using SSH client type: native
	I1018 15:06:46.150702  358344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1018 15:06:46.150725  358344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	
	==> CRI-O <==
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.936983205Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.93795619Z" level=info msg="Ran pod sandbox a739fdc115408b847cd8b684fa3b083e7a277b8d92f8d6724290788cde94d57c with infra container: kube-system/kube-proxy-cgl2t/POD" id=b4a2cf5e-e524-4447-96fc-877eb5c0800c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.939513177Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=4b0be756-4548-4fbf-9c54-18ba4a199ce8 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.940715503Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=cb42b8aa-bbe1-43cf-bf32-3ab755bfe8c2 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.941684153Z" level=info msg="Running pod sandbox: kube-system/kindnet-pj5dl/POD" id=a344c521-e41c-4267-b886-1dd0297ac443 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.941767645Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.944927021Z" level=info msg="Creating container: kube-system/kube-proxy-cgl2t/kube-proxy" id=25f661bf-6a1f-4f8c-afc1-ff7d427faeb9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.94524745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.945971336Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a344c521-e41c-4267-b886-1dd0297ac443 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.949176443Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.950215671Z" level=info msg="Ran pod sandbox 000379ff8383874bae931b93ff29303077ead5a8a687948182d3720b79b8a25b with infra container: kube-system/kindnet-pj5dl/POD" id=a344c521-e41c-4267-b886-1dd0297ac443 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.951124007Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.951625276Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=71b1ab2d-f441-45d9-9a1a-c649f5dc28fd name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.951883204Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.952557973Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=33cf7d94-ea55-45bd-a5e7-d73d5489932a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.957902935Z" level=info msg="Creating container: kube-system/kindnet-pj5dl/kindnet-cni" id=270ad417-8f29-4fce-a8d1-e86008534867 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.958226416Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.962423222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.96282862Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.984872177Z" level=info msg="Created container f4332f182160bffa772de18d09a15e4e4a4306895d7a73e98fe89470044f6273: kube-system/kindnet-pj5dl/kindnet-cni" id=270ad417-8f29-4fce-a8d1-e86008534867 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.985582916Z" level=info msg="Starting container: f4332f182160bffa772de18d09a15e4e4a4306895d7a73e98fe89470044f6273" id=fc3d6657-7d44-4db0-803a-b414308fe4f3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.987391574Z" level=info msg="Started container" PID=1606 containerID=f4332f182160bffa772de18d09a15e4e4a4306895d7a73e98fe89470044f6273 description=kube-system/kindnet-pj5dl/kindnet-cni id=fc3d6657-7d44-4db0-803a-b414308fe4f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=000379ff8383874bae931b93ff29303077ead5a8a687948182d3720b79b8a25b
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.990257955Z" level=info msg="Created container 0cf2462e8787abccdc02dc66523de15b89c6deaf6b5690738d271ee224ddab63: kube-system/kube-proxy-cgl2t/kube-proxy" id=25f661bf-6a1f-4f8c-afc1-ff7d427faeb9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.990859688Z" level=info msg="Starting container: 0cf2462e8787abccdc02dc66523de15b89c6deaf6b5690738d271ee224ddab63" id=d4847622-2632-4ca7-a039-6742b3641dc3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:06:45 newest-cni-741831 crio[775]: time="2025-10-18T15:06:45.993690298Z" level=info msg="Started container" PID=1607 containerID=0cf2462e8787abccdc02dc66523de15b89c6deaf6b5690738d271ee224ddab63 description=kube-system/kube-proxy-cgl2t/kube-proxy id=d4847622-2632-4ca7-a039-6742b3641dc3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a739fdc115408b847cd8b684fa3b083e7a277b8d92f8d6724290788cde94d57c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f4332f182160b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   000379ff83838       kindnet-pj5dl                               kube-system
	0cf2462e8787a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   a739fdc115408       kube-proxy-cgl2t                            kube-system
	d9480d779b339       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   11 seconds ago      Running             kube-apiserver            0                   85749bacbc259       kube-apiserver-newest-cni-741831            kube-system
	c802da290eff9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   11 seconds ago      Running             kube-controller-manager   0                   99e15634ed6c8       kube-controller-manager-newest-cni-741831   kube-system
	d668d5116e05d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   11 seconds ago      Running             etcd                      0                   ca9ddd71843d3       etcd-newest-cni-741831                      kube-system
	75adc9d3030c0       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   11 seconds ago      Running             kube-scheduler            0                   fffcf81c375a7       kube-scheduler-newest-cni-741831            kube-system
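	The table above is the CRI's view of the node; it can normally be reproduced on the node itself with crictl, which talks to the same CRI-O socket (a sketch, assuming the default socket path):
	# List all containers known to CRI-O, including exited ones.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a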
	
	
	==> describe nodes <==
	Name:               newest-cni-741831
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-741831
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=newest-cni-741831
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_06_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:06:37 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-741831
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:06:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:06:40 +0000   Sat, 18 Oct 2025 15:06:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:06:40 +0000   Sat, 18 Oct 2025 15:06:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:06:40 +0000   Sat, 18 Oct 2025 15:06:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 15:06:40 +0000   Sat, 18 Oct 2025 15:06:35 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-741831
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                93172a2b-ef45-4eea-9f95-aa90d7a726bd
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-741831                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9s
	  kube-system                 kindnet-pj5dl                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-741831             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-741831    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-cgl2t                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-741831             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 1s    kube-proxy       
	  Normal  Starting                 7s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s    kubelet          Node newest-cni-741831 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet          Node newest-cni-741831 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet          Node newest-cni-741831 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-741831 event: Registered Node newest-cni-741831 in Controller
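	The NotReady condition and the node.kubernetes.io/not-ready taint above persist until a CNI configuration file appears in /etc/cni/net.d/, which the kindnet pod started earlier is responsible for writing. A quick check from the host, assuming the profile name from this run:
	# The node flips to Ready once a CNI config file exists here (sketch).
	minikube -p newest-cni-741831 ssh -- ls /etc/cni/net.d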
	
	
	==> dmesg <==
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
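	The repeated "martian source" lines above are the kernel flagging packets whose source address (127.0.0.1) is not valid on the receiving interface (eth0); they appear only while martian logging is switched on. Checking and silencing the setting is a one-liner (sketch):
	# log_martians only controls logging; the packets are rejected either way.
	sysctl net.ipv4.conf.all.log_martians
	sudo sysctl -w net.ipv4.conf.all.log_martians=0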
	
	
	==> etcd [d668d5116e05deb4cc88604932be5d5ced5e2d6ad33350cbb717d59dc17134db] <==
	{"level":"warn","ts":"2025-10-18T15:06:36.840258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.846606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.853982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.860638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.867863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.874613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.881564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.889337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.896010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.905242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.913424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.919232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.926520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.933694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.940363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.947396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.953728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.960828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.967531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.973811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.988250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:36.996009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:37.003045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:37.060194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51132","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T15:06:47.564349Z","caller":"traceutil/trace.go:172","msg":"trace[1142649659] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"103.617765ms","start":"2025-10-18T15:06:47.460708Z","end":"2025-10-18T15:06:47.564326Z","steps":["trace[1142649659] 'process raft request'  (duration: 103.460799ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:06:47 up  2:49,  0 user,  load average: 3.74, 2.99, 2.02
	Linux newest-cni-741831 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f4332f182160bffa772de18d09a15e4e4a4306895d7a73e98fe89470044f6273] <==
	I1018 15:06:46.257058       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 15:06:46.257381       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1018 15:06:46.257524       1 main.go:148] setting mtu 1500 for CNI 
	I1018 15:06:46.257542       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 15:06:46.257560       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T15:06:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 15:06:46.463926       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 15:06:46.464082       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 15:06:46.464103       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 15:06:46.464368       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 15:06:47.101373       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 15:06:47.101406       1 metrics.go:72] Registering metrics
	I1018 15:06:47.101470       1 controller.go:711] "Syncing nftables rules"
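	Note the controller.go:390 line above: kindnet's optional NRI plugin exits because no NRI socket is exposed at /var/run/nri/nri.sock on this node, while the network-policy controller keeps running. Whether the runtime exposes that socket can be checked directly (sketch):
	# An absent socket means the NRI-based plugin stays disabled.
	test -S /var/run/nri/nri.sock && echo present || echo absent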
	
	
	==> kube-apiserver [d9480d779b339bdbd29f0ac45fb7365f0c1bf6b572efd15e64bd406369eb6957] <==
	I1018 15:06:37.563702       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 15:06:37.563709       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 15:06:37.563716       1 cache.go:39] Caches are synced for autoregister controller
	I1018 15:06:37.566251       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 15:06:37.567172       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:06:37.575843       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:06:37.576408       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 15:06:37.603213       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 15:06:38.467775       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 15:06:38.471697       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 15:06:38.471716       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:06:39.017941       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:06:39.071384       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:06:39.172192       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 15:06:39.178866       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1018 15:06:39.180110       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 15:06:39.184707       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 15:06:39.505929       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 15:06:40.257606       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 15:06:40.272151       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 15:06:40.285189       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 15:06:45.211556       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 15:06:45.560790       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:06:45.565457       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:06:45.608989       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [c802da290eff9e87d2007ded186d16d7e0987c92ba0674612248eb85c11f5ab6] <==
	I1018 15:06:44.470611       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 15:06:44.470630       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-741831" podCIDRs=["10.42.0.0/24"]
	I1018 15:06:44.477519       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 15:06:44.504239       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 15:06:44.504826       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 15:06:44.504847       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 15:06:44.506118       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 15:06:44.506138       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 15:06:44.506145       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 15:06:44.506237       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 15:06:44.506952       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 15:06:44.507885       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 15:06:44.507933       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 15:06:44.507956       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 15:06:44.507993       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 15:06:44.508031       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 15:06:44.508210       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 15:06:44.508454       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 15:06:44.508609       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 15:06:44.510804       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 15:06:44.511642       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 15:06:44.515194       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 15:06:44.516382       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 15:06:44.523906       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 15:06:44.528234       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [0cf2462e8787abccdc02dc66523de15b89c6deaf6b5690738d271ee224ddab63] <==
	I1018 15:06:46.035064       1 server_linux.go:53] "Using iptables proxy"
	I1018 15:06:46.098269       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 15:06:46.199214       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 15:06:46.199257       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1018 15:06:46.199371       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 15:06:46.226316       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 15:06:46.226402       1 server_linux.go:132] "Using iptables Proxier"
	I1018 15:06:46.234783       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 15:06:46.235254       1 server.go:527] "Version info" version="v1.34.1"
	I1018 15:06:46.235276       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:06:46.237393       1 config.go:200] "Starting service config controller"
	I1018 15:06:46.237415       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 15:06:46.237455       1 config.go:106] "Starting endpoint slice config controller"
	I1018 15:06:46.237464       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 15:06:46.237621       1 config.go:309] "Starting node config controller"
	I1018 15:06:46.237619       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 15:06:46.237660       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 15:06:46.237663       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 15:06:46.237669       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 15:06:46.338125       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 15:06:46.338166       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 15:06:46.338125       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [75adc9d3030c093e17f83e5eafabd2dc37892575e57cb774782e44d21fc6f818] <==
	E1018 15:06:37.531732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 15:06:37.531784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 15:06:37.531961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 15:06:37.531979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 15:06:37.531970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 15:06:37.532029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 15:06:37.532331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 15:06:37.532397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 15:06:37.532430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 15:06:37.532453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 15:06:37.532510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 15:06:37.532517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 15:06:37.532585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 15:06:38.365662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 15:06:38.390154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 15:06:38.401690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 15:06:38.441498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 15:06:38.456049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 15:06:38.508308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 15:06:38.609318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 15:06:38.679896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 15:06:38.720218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 15:06:38.768565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 15:06:38.898800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1018 15:06:40.626116       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 15:06:40 newest-cni-741831 kubelet[1323]: I1018 15:06:40.422741    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7750874bb7a48ed216b075b6e05a50e-kubeconfig\") pod \"kube-scheduler-newest-cni-741831\" (UID: \"f7750874bb7a48ed216b075b6e05a50e\") " pod="kube-system/kube-scheduler-newest-cni-741831"
	Oct 18 15:06:40 newest-cni-741831 kubelet[1323]: I1018 15:06:40.422775    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85b9a2e679edf97e40913d72cad76d4d-etc-ca-certificates\") pod \"kube-apiserver-newest-cni-741831\" (UID: \"85b9a2e679edf97e40913d72cad76d4d\") " pod="kube-system/kube-apiserver-newest-cni-741831"
	Oct 18 15:06:40 newest-cni-741831 kubelet[1323]: I1018 15:06:40.422820    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/32a560f7857395155a2235dd9a86fc18-flexvolume-dir\") pod \"kube-controller-manager-newest-cni-741831\" (UID: \"32a560f7857395155a2235dd9a86fc18\") " pod="kube-system/kube-controller-manager-newest-cni-741831"
	Oct 18 15:06:41 newest-cni-741831 kubelet[1323]: I1018 15:06:41.115033    1323 apiserver.go:52] "Watching apiserver"
	Oct 18 15:06:41 newest-cni-741831 kubelet[1323]: I1018 15:06:41.121122    1323 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 15:06:41 newest-cni-741831 kubelet[1323]: I1018 15:06:41.192791    1323 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-741831"
	Oct 18 15:06:41 newest-cni-741831 kubelet[1323]: I1018 15:06:41.193622    1323 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-741831"
	Oct 18 15:06:41 newest-cni-741831 kubelet[1323]: E1018 15:06:41.208713    1323 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-741831\" already exists" pod="kube-system/etcd-newest-cni-741831"
	Oct 18 15:06:41 newest-cni-741831 kubelet[1323]: E1018 15:06:41.212007    1323 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-741831\" already exists" pod="kube-system/kube-apiserver-newest-cni-741831"
	Oct 18 15:06:41 newest-cni-741831 kubelet[1323]: I1018 15:06:41.248891    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-741831" podStartSLOduration=3.248833469 podStartE2EDuration="3.248833469s" podCreationTimestamp="2025-10-18 15:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:06:41.225575753 +0000 UTC m=+1.186918636" watchObservedRunningTime="2025-10-18 15:06:41.248833469 +0000 UTC m=+1.210176353"
	Oct 18 15:06:41 newest-cni-741831 kubelet[1323]: I1018 15:06:41.272861    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-741831" podStartSLOduration=1.272837516 podStartE2EDuration="1.272837516s" podCreationTimestamp="2025-10-18 15:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:06:41.250708005 +0000 UTC m=+1.212050880" watchObservedRunningTime="2025-10-18 15:06:41.272837516 +0000 UTC m=+1.234180399"
	Oct 18 15:06:41 newest-cni-741831 kubelet[1323]: I1018 15:06:41.288590    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-741831" podStartSLOduration=1.2885637189999999 podStartE2EDuration="1.288563719s" podCreationTimestamp="2025-10-18 15:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:06:41.273399886 +0000 UTC m=+1.234742769" watchObservedRunningTime="2025-10-18 15:06:41.288563719 +0000 UTC m=+1.249906602"
	Oct 18 15:06:41 newest-cni-741831 kubelet[1323]: I1018 15:06:41.302837    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-741831" podStartSLOduration=1.302812915 podStartE2EDuration="1.302812915s" podCreationTimestamp="2025-10-18 15:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:06:41.289566899 +0000 UTC m=+1.250909782" watchObservedRunningTime="2025-10-18 15:06:41.302812915 +0000 UTC m=+1.264155798"
	Oct 18 15:06:44 newest-cni-741831 kubelet[1323]: I1018 15:06:44.503767    1323 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 15:06:44 newest-cni-741831 kubelet[1323]: I1018 15:06:44.504609    1323 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 15:06:45 newest-cni-741831 kubelet[1323]: I1018 15:06:45.660019    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e790462a-4f99-4636-aa73-b8cf26812e75-kube-proxy\") pod \"kube-proxy-cgl2t\" (UID: \"e790462a-4f99-4636-aa73-b8cf26812e75\") " pod="kube-system/kube-proxy-cgl2t"
	Oct 18 15:06:45 newest-cni-741831 kubelet[1323]: I1018 15:06:45.660070    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e790462a-4f99-4636-aa73-b8cf26812e75-xtables-lock\") pod \"kube-proxy-cgl2t\" (UID: \"e790462a-4f99-4636-aa73-b8cf26812e75\") " pod="kube-system/kube-proxy-cgl2t"
	Oct 18 15:06:45 newest-cni-741831 kubelet[1323]: I1018 15:06:45.660092    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/338db4d1-7623-42cc-ac47-40e8f34baf31-xtables-lock\") pod \"kindnet-pj5dl\" (UID: \"338db4d1-7623-42cc-ac47-40e8f34baf31\") " pod="kube-system/kindnet-pj5dl"
	Oct 18 15:06:45 newest-cni-741831 kubelet[1323]: I1018 15:06:45.660116    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/338db4d1-7623-42cc-ac47-40e8f34baf31-cni-cfg\") pod \"kindnet-pj5dl\" (UID: \"338db4d1-7623-42cc-ac47-40e8f34baf31\") " pod="kube-system/kindnet-pj5dl"
	Oct 18 15:06:45 newest-cni-741831 kubelet[1323]: I1018 15:06:45.660210    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e790462a-4f99-4636-aa73-b8cf26812e75-lib-modules\") pod \"kube-proxy-cgl2t\" (UID: \"e790462a-4f99-4636-aa73-b8cf26812e75\") " pod="kube-system/kube-proxy-cgl2t"
	Oct 18 15:06:45 newest-cni-741831 kubelet[1323]: I1018 15:06:45.660265    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrxpm\" (UniqueName: \"kubernetes.io/projected/e790462a-4f99-4636-aa73-b8cf26812e75-kube-api-access-jrxpm\") pod \"kube-proxy-cgl2t\" (UID: \"e790462a-4f99-4636-aa73-b8cf26812e75\") " pod="kube-system/kube-proxy-cgl2t"
	Oct 18 15:06:45 newest-cni-741831 kubelet[1323]: I1018 15:06:45.660296    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/338db4d1-7623-42cc-ac47-40e8f34baf31-lib-modules\") pod \"kindnet-pj5dl\" (UID: \"338db4d1-7623-42cc-ac47-40e8f34baf31\") " pod="kube-system/kindnet-pj5dl"
	Oct 18 15:06:45 newest-cni-741831 kubelet[1323]: I1018 15:06:45.660321    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v88h\" (UniqueName: \"kubernetes.io/projected/338db4d1-7623-42cc-ac47-40e8f34baf31-kube-api-access-4v88h\") pod \"kindnet-pj5dl\" (UID: \"338db4d1-7623-42cc-ac47-40e8f34baf31\") " pod="kube-system/kindnet-pj5dl"
	Oct 18 15:06:46 newest-cni-741831 kubelet[1323]: I1018 15:06:46.240707    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cgl2t" podStartSLOduration=1.240682834 podStartE2EDuration="1.240682834s" podCreationTimestamp="2025-10-18 15:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:06:46.226276032 +0000 UTC m=+6.187618915" watchObservedRunningTime="2025-10-18 15:06:46.240682834 +0000 UTC m=+6.202025717"
	Oct 18 15:06:46 newest-cni-741831 kubelet[1323]: I1018 15:06:46.334400    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-pj5dl" podStartSLOduration=1.334371469 podStartE2EDuration="1.334371469s" podCreationTimestamp="2025-10-18 15:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 15:06:46.241136305 +0000 UTC m=+6.202479188" watchObservedRunningTime="2025-10-18 15:06:46.334371469 +0000 UTC m=+6.295714363"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-741831 -n newest-cni-741831
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-741831 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-dksbs storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-741831 describe pod coredns-66bc5c9577-dksbs storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-741831 describe pod coredns-66bc5c9577-dksbs storage-provisioner: exit status 1 (65.845677ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-dksbs" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-741831 describe pod coredns-66bc5c9577-dksbs storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.68s)
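Note on the NotFound errors above: the harness resolved the non-running pod names once and then described those exact names; by the time `kubectl describe` ran, coredns-66bc5c9577-dksbs and storage-provisioner had already been deleted or replaced, so both lookups failed. A minimal shell sketch of a race-free variant, re-resolving names at describe time (illustrative only, not part of helpers_test.go; assumes the newest-cni-741831 context from this run still exists):

	# Re-list non-running pods and describe whatever exists right now,
	# instead of reusing names captured by an earlier query.
	pods=$(kubectl --context newest-cni-741831 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\n"}{end}')
	for p in $pods; do
	  kubectl --context newest-cni-741831 describe pod -n "${p%%/*}" "${p##*/}"
	done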

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (6.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-741831 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-741831 --alsologtostderr -v=1: exit status 80 (1.895151345s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-741831 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 15:07:04.808503  368031 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:07:04.808940  368031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:07:04.808987  368031 out.go:374] Setting ErrFile to fd 2...
	I1018 15:07:04.809009  368031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:07:04.809413  368031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:07:04.809825  368031 out.go:368] Setting JSON to false
	I1018 15:07:04.809943  368031 mustload.go:65] Loading cluster: newest-cni-741831
	I1018 15:07:04.810412  368031 config.go:182] Loaded profile config "newest-cni-741831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:04.811104  368031 cli_runner.go:164] Run: docker container inspect newest-cni-741831 --format={{.State.Status}}
	I1018 15:07:04.836102  368031 host.go:66] Checking if "newest-cni-741831" exists ...
	I1018 15:07:04.836595  368031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:07:04.905469  368031 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:88 SystemTime:2025-10-18 15:07:04.892797268 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:07:04.906347  368031 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-741831 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 15:07:04.999467  368031 out.go:179] * Pausing node newest-cni-741831 ... 
	I1018 15:07:05.019581  368031 host.go:66] Checking if "newest-cni-741831" exists ...
	I1018 15:07:05.019947  368031 ssh_runner.go:195] Run: systemctl --version
	I1018 15:07:05.020010  368031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:07:05.039683  368031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:07:05.139633  368031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:07:05.153599  368031 pause.go:52] kubelet running: true
	I1018 15:07:05.153690  368031 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:07:05.296355  368031 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:07:05.296452  368031 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:07:05.375425  368031 cri.go:89] found id: "bd9c45aa764304c606046593f542a1af165aa1d2c8384b462aa7dcb38f337591"
	I1018 15:07:05.375454  368031 cri.go:89] found id: "ad5bdf93a6fd51d0e0a0d2091f1b59559b68981cabe5ac45c5a8502f33c102ad"
	I1018 15:07:05.375460  368031 cri.go:89] found id: "ad1e0d015fdf1368ded1825253a96c3951279196aa718df2645c2024b26f3fc1"
	I1018 15:07:05.375465  368031 cri.go:89] found id: "3aaf72a8fab306d03fd39b588902cacfd12938e06467694af5c1db7254c80b0d"
	I1018 15:07:05.375470  368031 cri.go:89] found id: "474f32f68077ed4928d09f0824bece6ea546498605afd71ed210110e677a4f30"
	I1018 15:07:05.375474  368031 cri.go:89] found id: "dd0e3d8eec101af0cdf8be9c93b2425a97577f086a02a3133b60fa3686e3c82a"
	I1018 15:07:05.375479  368031 cri.go:89] found id: ""
	I1018 15:07:05.375524  368031 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:07:05.389424  368031 retry.go:31] will retry after 192.798367ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:07:05Z" level=error msg="open /run/runc: no such file or directory"
	I1018 15:07:05.582969  368031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:07:05.600480  368031 pause.go:52] kubelet running: false
	I1018 15:07:05.600555  368031 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:07:05.752793  368031 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:07:05.752894  368031 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:07:05.832795  368031 cri.go:89] found id: "bd9c45aa764304c606046593f542a1af165aa1d2c8384b462aa7dcb38f337591"
	I1018 15:07:05.832821  368031 cri.go:89] found id: "ad5bdf93a6fd51d0e0a0d2091f1b59559b68981cabe5ac45c5a8502f33c102ad"
	I1018 15:07:05.832826  368031 cri.go:89] found id: "ad1e0d015fdf1368ded1825253a96c3951279196aa718df2645c2024b26f3fc1"
	I1018 15:07:05.832831  368031 cri.go:89] found id: "3aaf72a8fab306d03fd39b588902cacfd12938e06467694af5c1db7254c80b0d"
	I1018 15:07:05.832835  368031 cri.go:89] found id: "474f32f68077ed4928d09f0824bece6ea546498605afd71ed210110e677a4f30"
	I1018 15:07:05.832840  368031 cri.go:89] found id: "dd0e3d8eec101af0cdf8be9c93b2425a97577f086a02a3133b60fa3686e3c82a"
	I1018 15:07:05.832844  368031 cri.go:89] found id: ""
	I1018 15:07:05.832890  368031 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:07:05.845680  368031 retry.go:31] will retry after 492.134961ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:07:05Z" level=error msg="open /run/runc: no such file or directory"
	I1018 15:07:06.338067  368031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:07:06.354670  368031 pause.go:52] kubelet running: false
	I1018 15:07:06.354742  368031 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:07:06.507090  368031 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:07:06.507179  368031 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:07:06.600572  368031 cri.go:89] found id: "bd9c45aa764304c606046593f542a1af165aa1d2c8384b462aa7dcb38f337591"
	I1018 15:07:06.600599  368031 cri.go:89] found id: "ad5bdf93a6fd51d0e0a0d2091f1b59559b68981cabe5ac45c5a8502f33c102ad"
	I1018 15:07:06.600620  368031 cri.go:89] found id: "ad1e0d015fdf1368ded1825253a96c3951279196aa718df2645c2024b26f3fc1"
	I1018 15:07:06.600625  368031 cri.go:89] found id: "3aaf72a8fab306d03fd39b588902cacfd12938e06467694af5c1db7254c80b0d"
	I1018 15:07:06.600629  368031 cri.go:89] found id: "474f32f68077ed4928d09f0824bece6ea546498605afd71ed210110e677a4f30"
	I1018 15:07:06.600633  368031 cri.go:89] found id: "dd0e3d8eec101af0cdf8be9c93b2425a97577f086a02a3133b60fa3686e3c82a"
	I1018 15:07:06.600637  368031 cri.go:89] found id: ""
	I1018 15:07:06.600685  368031 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:07:06.619759  368031 out.go:203] 
	W1018 15:07:06.622951  368031 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:07:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:07:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 15:07:06.622977  368031 out.go:285] * 
	* 
	W1018 15:07:06.629730  368031 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 15:07:06.630978  368031 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-741831 --alsologtostderr -v=1 failed: exit status 80
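Note on the exit status 80 above: pause aborts because every attempt at `sudo runc list -f json` fails with "open /run/runc: no such file or directory", even though crictl reports six running kube-system containers, so runc's default state directory is missing on the node. A hedged diagnostic sketch (assumes shell access via `minikube ssh`; /run/runc and /run/crun are the default state roots of runc and crun, and a populated /run/crun alongside a missing /run/runc would suggest crio is driving containers through a different OCI runtime than the pause path expects):

	# Which runtime does crio report, and which state directory is populated?
	minikube ssh -p newest-cni-741831 -- sudo crictl info | grep -i runtime
	minikube ssh -p newest-cni-741831 -- sudo ls -l /run/runc /run/crun
	# Reproduces the exact call that fails in the pause retry loop above.
	minikube ssh -p newest-cni-741831 -- sudo runc list -f json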
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-741831
helpers_test.go:243: (dbg) docker inspect newest-cni-741831:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50",
	        "Created": "2025-10-18T15:06:24.424165883Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 364193,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T15:06:51.921342323Z",
	            "FinishedAt": "2025-10-18T15:06:50.972031863Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50/hostname",
	        "HostsPath": "/var/lib/docker/containers/80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50/hosts",
	        "LogPath": "/var/lib/docker/containers/80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50/80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50-json.log",
	        "Name": "/newest-cni-741831",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-741831:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-741831",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50",
	                "LowerDir": "/var/lib/docker/overlay2/ccef2c722d5f80b0292930b40c9275b44275a6f6d4de631a2a924b9d5808916e-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ccef2c722d5f80b0292930b40c9275b44275a6f6d4de631a2a924b9d5808916e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ccef2c722d5f80b0292930b40c9275b44275a6f6d4de631a2a924b9d5808916e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ccef2c722d5f80b0292930b40c9275b44275a6f6d4de631a2a924b9d5808916e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-741831",
	                "Source": "/var/lib/docker/volumes/newest-cni-741831/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-741831",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-741831",
	                "name.minikube.sigs.k8s.io": "newest-cni-741831",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e83de6f0bc9a1243b83bad4d2ce36aa3ac43695774fe2be4b01df50cc6fb6b39",
	            "SandboxKey": "/var/run/docker/netns/e83de6f0bc9a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-741831": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:cd:dd:bd:cd:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f155453f4f173ba69c6ef6bc9d76a496868feaebcb5b5f9ed955e83061073a43",
	                    "EndpointID": "415494b5751db51902f0d37a4ecc4e2eb888c0ae034ba2cc08d9c68245aa1d76",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-741831",
	                        "80f647182c95"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-741831 -n newest-cni-741831
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-741831 -n newest-cni-741831: exit status 2 (372.754111ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-741831 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-741831 logs -n 25: (1.118450323s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-948537                                                                                                                                                                                                                     │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ delete  │ -p disable-driver-mounts-677415                                                                                                                                                                                                               │ disable-driver-mounts-677415 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p default-k8s-diff-port-489104 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p cert-expiration-327346 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-327346       │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ delete  │ -p cert-expiration-327346                                                                                                                                                                                                                     │ cert-expiration-327346       │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p newest-cni-741831 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-775590 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ stop    │ -p embed-certs-775590 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ image   │ no-preload-165275 image list --format=json                                                                                                                                                                                                    │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ pause   │ -p no-preload-165275 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-489104 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-775590 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ delete  │ -p no-preload-165275                                                                                                                                                                                                                          │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p embed-certs-775590 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-489104 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ delete  │ -p no-preload-165275                                                                                                                                                                                                                          │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p auto-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-741831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ stop    │ -p newest-cni-741831 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-741831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p newest-cni-741831 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:07 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-489104 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p default-k8s-diff-port-489104 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ image   │ newest-cni-741831 image list --format=json                                                                                                                                                                                                    │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ pause   │ -p newest-cni-741831 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:06:59
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:06:59.872233  366690 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:06:59.872747  366690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:06:59.872757  366690 out.go:374] Setting ErrFile to fd 2...
	I1018 15:06:59.872764  366690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:06:59.873131  366690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:06:59.873705  366690 out.go:368] Setting JSON to false
	I1018 15:06:59.875300  366690 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10171,"bootTime":1760789849,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:06:59.875429  366690 start.go:141] virtualization: kvm guest
	I1018 15:06:59.877582  366690 out.go:179] * [default-k8s-diff-port-489104] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:06:59.879381  366690 notify.go:220] Checking for updates...
	I1018 15:06:59.879408  366690 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:06:59.883115  366690 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:06:59.884420  366690 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:06:59.886657  366690 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 15:06:59.887948  366690 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:06:59.889165  366690 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:06:59.891121  366690 config.go:182] Loaded profile config "default-k8s-diff-port-489104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:59.891809  366690 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:06:59.924074  366690 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:06:59.924193  366690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:07:00.014603  366690 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-18 15:06:59.998291777 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:07:00.014822  366690 docker.go:318] overlay module found
	I1018 15:07:00.017160  366690 out.go:179] * Using the docker driver based on existing profile
	I1018 15:07:00.019336  366690 start.go:305] selected driver: docker
	I1018 15:07:00.019354  366690 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-489104 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-489104 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:07:00.019515  366690 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:07:00.020338  366690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:07:00.091525  366690 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-18 15:07:00.081133385 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:07:00.091959  366690 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:07:00.091999  366690 cni.go:84] Creating CNI manager for ""
	I1018 15:07:00.092065  366690 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:07:00.092123  366690 start.go:349] cluster config:
	{Name:default-k8s-diff-port-489104 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-489104 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:07:00.094372  366690 out.go:179] * Starting "default-k8s-diff-port-489104" primary control-plane node in "default-k8s-diff-port-489104" cluster
	I1018 15:07:00.095571  366690 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:07:00.096725  366690 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:07:00.097794  366690 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:07:00.097861  366690 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 15:07:00.097855  366690 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 15:07:00.097874  366690 cache.go:58] Caching tarball of preloaded images
	I1018 15:07:00.097989  366690 preload.go:233] Found /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 15:07:00.098004  366690 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 15:07:00.098137  366690 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/default-k8s-diff-port-489104/config.json ...
	I1018 15:07:00.121397  366690 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:07:00.121419  366690 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 15:07:00.121435  366690 cache.go:232] Successfully downloaded all kic artifacts
	I1018 15:07:00.121463  366690 start.go:360] acquireMachinesLock for default-k8s-diff-port-489104: {Name:mkc98cd1d4086725a8dd8ef11198f9481b2bbd15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:07:00.121546  366690 start.go:364] duration metric: took 43.139µs to acquireMachinesLock for "default-k8s-diff-port-489104"
	I1018 15:07:00.121569  366690 start.go:96] Skipping create...Using existing machine configuration
	I1018 15:07:00.121581  366690 fix.go:54] fixHost starting: 
	I1018 15:07:00.121808  366690 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-489104 --format={{.State.Status}}
	I1018 15:07:00.140247  366690 fix.go:112] recreateIfNeeded on default-k8s-diff-port-489104: state=Stopped err=<nil>
	W1018 15:07:00.140281  366690 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 15:06:56.617433  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	W1018 15:06:59.118343  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	W1018 15:07:01.141447  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
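
Note: the repeated pod_ready warnings above are minikube polling the pod's Ready condition until it flips or the wait times out. A minimal way to inspect the same condition by hand, assuming kubectl is pointed at the affected cluster (pod name taken from the log):

	kubectl -n kube-system get pod coredns-66bc5c9577-4b6bm \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# or block until it becomes Ready (timeout chosen arbitrarily):
	kubectl -n kube-system wait pod/coredns-66bc5c9577-4b6bm --for=condition=Ready --timeout=5m
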
	I1018 15:06:59.486372  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 15:06:59.486406  363917 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 15:06:59.486480  363917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:59.519782  363917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:59.521972  363917 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 15:06:59.522075  363917 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 15:06:59.522184  363917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:59.525273  363917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:59.556357  363917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:59.677169  363917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:06:59.681275  363917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 15:06:59.711883  363917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 15:06:59.715407  363917 api_server.go:52] waiting for apiserver process to appear ...
	I1018 15:06:59.715468  363917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:06:59.729955  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 15:06:59.729985  363917 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 15:06:59.761621  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 15:06:59.761703  363917 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 15:06:59.789977  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 15:06:59.790003  363917 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 15:06:59.810880  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 15:06:59.810946  363917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 15:06:59.838233  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 15:06:59.838260  363917 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 15:06:59.865973  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 15:06:59.866004  363917 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 15:06:59.884891  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 15:06:59.884937  363917 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 15:06:59.909460  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 15:06:59.909841  363917 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 15:06:59.930643  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 15:06:59.930671  363917 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 15:06:59.953230  363917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 15:07:03.409278  363917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.697304993s)
	I1018 15:07:03.409391  363917 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.693900115s)
	I1018 15:07:03.409425  363917 api_server.go:72] duration metric: took 3.973585291s to wait for apiserver process to appear ...
	I1018 15:07:03.409433  363917 api_server.go:88] waiting for apiserver healthz status ...
	I1018 15:07:03.409457  363917 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 15:07:03.409588  363917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.456312979s)
	I1018 15:07:03.409937  363917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.728606372s)
	I1018 15:07:03.411215  363917 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-741831 addons enable metrics-server
	
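Note: to confirm which addons actually ended up enabled for this profile after the restart, minikube's addon listing can be checked directly (profile name taken from the log above):

	minikube -p newest-cni-741831 addons list
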
	I1018 15:07:03.415535  363917 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 15:07:03.415567  363917 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 15:07:03.435041  363917 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 15:07:03.436855  363917 addons.go:514] duration metric: took 4.000782457s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 15:07:03.909626  363917 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 15:07:03.916379  363917 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1018 15:07:03.918535  363917 api_server.go:141] control plane version: v1.34.1
	I1018 15:07:03.918568  363917 api_server.go:131] duration metric: took 509.128473ms to wait for apiserver health ...
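
Note: the 500 above is expected briefly after an apiserver restart: the verbose /healthz output marks only the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks as pending, and the next poll about half a second later returns 200. The same verbose view can be fetched through kubectl; a minimal sketch, assuming a working kubeconfig for this cluster:

	kubectl get --raw '/healthz?verbose'
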
	I1018 15:07:03.918578  363917 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:07:03.924287  363917 system_pods.go:59] 8 kube-system pods found
	I1018 15:07:03.924333  363917 system_pods.go:61] "coredns-66bc5c9577-dksbs" [4afc2ce3-9388-42d7-a40e-f7fa9040d77f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 15:07:03.924372  363917 system_pods.go:61] "etcd-newest-cni-741831" [12f950c8-4dfa-4ccc-83d6-5610731545be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 15:07:03.924387  363917 system_pods.go:61] "kindnet-pj5dl" [338db4d1-7623-42cc-ac47-40e8f34baf31] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 15:07:03.924397  363917 system_pods.go:61] "kube-apiserver-newest-cni-741831" [046fc171-edaf-4ada-b09f-a1fc0d2baeee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 15:07:03.924406  363917 system_pods.go:61] "kube-controller-manager-newest-cni-741831" [2a2cd49d-4869-4d98-b7a3-2cf8ffacb083] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 15:07:03.924586  363917 system_pods.go:61] "kube-proxy-cgl2t" [e790462a-4f99-4636-aa73-b8cf26812e75] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 15:07:03.924603  363917 system_pods.go:61] "kube-scheduler-newest-cni-741831" [4c40112d-9d56-49a9-9442-be510c5aaf5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 15:07:03.924639  363917 system_pods.go:61] "storage-provisioner" [29182b74-a02c-4f22-9317-75f93297a124] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 15:07:03.924654  363917 system_pods.go:74] duration metric: took 6.066568ms to wait for pod list to return data ...
	I1018 15:07:03.924668  363917 default_sa.go:34] waiting for default service account to be created ...
	I1018 15:07:03.928517  363917 default_sa.go:45] found service account: "default"
	I1018 15:07:03.928549  363917 default_sa.go:55] duration metric: took 3.868642ms for default service account to be created ...
	I1018 15:07:03.928590  363917 kubeadm.go:586] duration metric: took 4.492749994s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 15:07:03.928626  363917 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:07:03.932060  363917 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 15:07:03.932093  363917 node_conditions.go:123] node cpu capacity is 8
	I1018 15:07:03.932107  363917 node_conditions.go:105] duration metric: took 3.47516ms to run NodePressure ...
	I1018 15:07:03.932124  363917 start.go:241] waiting for startup goroutines ...
	I1018 15:07:03.932134  363917 start.go:246] waiting for cluster config update ...
	I1018 15:07:03.932149  363917 start.go:255] writing updated cluster config ...
	I1018 15:07:03.932472  363917 ssh_runner.go:195] Run: rm -f paused
	I1018 15:07:03.996125  363917 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:07:03.998967  363917 out.go:179] * Done! kubectl is now configured to use "newest-cni-741831" cluster and "default" namespace by default
	I1018 15:07:00.012545  359679 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001279533s
	I1018 15:07:00.016373  359679 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 15:07:00.016571  359679 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 15:07:00.016689  359679 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 15:07:00.016805  359679 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 15:07:02.260943  359679 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.243926997s
	I1018 15:07:03.372090  359679 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.355765694s
	I1018 15:07:00.142085  366690 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-489104" ...
	I1018 15:07:00.142175  366690 cli_runner.go:164] Run: docker start default-k8s-diff-port-489104
	I1018 15:07:00.447143  366690 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-489104 --format={{.State.Status}}
	I1018 15:07:00.482063  366690 kic.go:430] container "default-k8s-diff-port-489104" state is running.
	I1018 15:07:00.483352  366690 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-489104
	I1018 15:07:00.512041  366690 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/default-k8s-diff-port-489104/config.json ...
	I1018 15:07:00.512470  366690 machine.go:93] provisionDockerMachine start ...
	I1018 15:07:00.512553  366690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-489104
	I1018 15:07:00.543650  366690 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:00.544247  366690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1018 15:07:00.544300  366690 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:07:00.545926  366690 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43572->127.0.0.1:33103: read: connection reset by peer
	I1018 15:07:03.707993  366690 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-489104
	
	I1018 15:07:03.708037  366690 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-489104"
	I1018 15:07:03.708108  366690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-489104
	I1018 15:07:03.732219  366690 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:03.732520  366690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1018 15:07:03.732540  366690 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-489104 && echo "default-k8s-diff-port-489104" | sudo tee /etc/hostname
	I1018 15:07:03.907349  366690 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-489104
	
	I1018 15:07:03.907433  366690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-489104
	I1018 15:07:03.933124  366690 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:03.933410  366690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1018 15:07:03.933441  366690 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-489104' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-489104/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-489104' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:07:04.091816  366690 main.go:141] libmachine: SSH cmd err, output: <nil>: 
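
Note: the shell snippet above is idempotent: the first grep -xq checks whether any /etc/hosts line already maps the hostname; if one does not exist, an existing 127.0.1.1 entry is rewritten in place with sed, and a new line is appended only when there is none. A quick check of the result over the same SSH session (sketch):

	grep '^127.0.1.1' /etc/hosts
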
	I1018 15:07:04.091984  366690 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 15:07:04.092115  366690 ubuntu.go:190] setting up certificates
	I1018 15:07:04.092146  366690 provision.go:84] configureAuth start
	I1018 15:07:04.092230  366690 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-489104
	I1018 15:07:04.119685  366690 provision.go:143] copyHostCerts
	I1018 15:07:04.119970  366690 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem, removing ...
	I1018 15:07:04.120006  366690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:07:04.120084  366690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 15:07:04.120202  366690 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem, removing ...
	I1018 15:07:04.120212  366690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:07:04.120260  366690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 15:07:04.120353  366690 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem, removing ...
	I1018 15:07:04.120360  366690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:07:04.120400  366690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 15:07:04.120479  366690 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-489104 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-489104 localhost minikube]
	I1018 15:07:04.270879  366690 provision.go:177] copyRemoteCerts
	I1018 15:07:04.270984  366690 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:07:04.271037  366690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-489104
	I1018 15:07:04.295683  366690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/default-k8s-diff-port-489104/id_rsa Username:docker}
	I1018 15:07:04.410594  366690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:07:04.442767  366690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 15:07:04.472765  366690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 15:07:04.501475  366690 provision.go:87] duration metric: took 409.310201ms to configureAuth
	I1018 15:07:04.501502  366690 ubuntu.go:206] setting minikube options for container-runtime
	I1018 15:07:04.501661  366690 config.go:182] Loaded profile config "default-k8s-diff-port-489104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:04.501744  366690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-489104
	I1018 15:07:04.527504  366690 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:04.528164  366690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1018 15:07:04.528224  366690 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:07:06.018972  359679 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002564854s
	I1018 15:07:06.033794  359679 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 15:07:06.046683  359679 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 15:07:06.056624  359679 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 15:07:06.056843  359679 kubeadm.go:318] [mark-control-plane] Marking the node auto-034446 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 15:07:06.066286  359679 kubeadm.go:318] [bootstrap-token] Using token: jwy8id.3lefdrff37gw7l4a
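
Note: the [control-plane-check] lines above are kubeadm probing each component's health endpoint until it answers. The same endpoints can be probed by hand from inside the node; a sketch using the addresses from the log (-k because the serving certificates are cluster-internal):

	curl -sk https://192.168.85.2:8443/livez      # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez        # kube-scheduler
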
	W1018 15:07:03.619770  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	W1018 15:07:06.117215  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.820144201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.826640816Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0c341a41-6d94-4fa9-bf70-b4ab8dd8cdcd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.827246446Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=08b69e25-e169-4640-b3cd-98084a3bd640 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.829862192Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.830957171Z" level=info msg="Ran pod sandbox 9a6e297a5a80cf47c721fd3a14bbf8813ab6a669f3755f12d7f24eed7ea41be7 with infra container: kube-system/kube-proxy-cgl2t/POD" id=0c341a41-6d94-4fa9-bf70-b4ab8dd8cdcd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.831120539Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.832466016Z" level=info msg="Ran pod sandbox 4dc6db69e365c4fa1d61fc14af741e61019dca09e04f8a9f0e8721d5b9840d6c with infra container: kube-system/kindnet-pj5dl/POD" id=08b69e25-e169-4640-b3cd-98084a3bd640 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.832700654Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=08014cce-24e6-4570-a977-d817926ae49a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.833844034Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=00f3d160-44ed-4613-854b-35690ec45bcc name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.834291858Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=dee53644-5fac-41f4-bf46-f8e38297e413 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.835841638Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0cfbef94-150f-4fd9-a08f-0c066725591c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.835932306Z" level=info msg="Creating container: kube-system/kube-proxy-cgl2t/kube-proxy" id=d0ac5192-4ac7-43fe-ae77-874166b77b9c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.836206819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.83722113Z" level=info msg="Creating container: kube-system/kindnet-pj5dl/kindnet-cni" id=8faf631e-cf92-440e-bf94-0971826c78b1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.83935718Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.842656181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.843328001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.844516322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.845311616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.877571952Z" level=info msg="Created container bd9c45aa764304c606046593f542a1af165aa1d2c8384b462aa7dcb38f337591: kube-system/kindnet-pj5dl/kindnet-cni" id=8faf631e-cf92-440e-bf94-0971826c78b1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.87828104Z" level=info msg="Starting container: bd9c45aa764304c606046593f542a1af165aa1d2c8384b462aa7dcb38f337591" id=37fe3bc1-d042-4ef5-b2dc-c507bef4ecb7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.881179049Z" level=info msg="Started container" PID=1016 containerID=bd9c45aa764304c606046593f542a1af165aa1d2c8384b462aa7dcb38f337591 description=kube-system/kindnet-pj5dl/kindnet-cni id=37fe3bc1-d042-4ef5-b2dc-c507bef4ecb7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4dc6db69e365c4fa1d61fc14af741e61019dca09e04f8a9f0e8721d5b9840d6c
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.882275283Z" level=info msg="Created container ad5bdf93a6fd51d0e0a0d2091f1b59559b68981cabe5ac45c5a8502f33c102ad: kube-system/kube-proxy-cgl2t/kube-proxy" id=d0ac5192-4ac7-43fe-ae77-874166b77b9c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.88300161Z" level=info msg="Starting container: ad5bdf93a6fd51d0e0a0d2091f1b59559b68981cabe5ac45c5a8502f33c102ad" id=bd1cb3db-7aba-460c-afb7-3bcb3a8252b2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.886617017Z" level=info msg="Started container" PID=1017 containerID=ad5bdf93a6fd51d0e0a0d2091f1b59559b68981cabe5ac45c5a8502f33c102ad description=kube-system/kube-proxy-cgl2t/kube-proxy id=bd1cb3db-7aba-460c-afb7-3bcb3a8252b2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9a6e297a5a80cf47c721fd3a14bbf8813ab6a669f3755f12d7f24eed7ea41be7
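
Note: the two "Skipping invalid sysctl" warnings above are benign: net.ipv4.ip_unprivileged_port_start cannot be applied per-pod when the pod uses the host network (kube-proxy and kindnet here), so CRI-O drops the sysctl and continues. To tail the runtime log directly, a sketch assuming the node container name from this run and that CRI-O runs as a systemd unit inside the kicbase image:

	docker exec newest-cni-741831 journalctl -u crio --no-pager -n 50
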
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bd9c45aa76430       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   3 seconds ago       Running             kindnet-cni               1                   4dc6db69e365c       kindnet-pj5dl                               kube-system
	ad5bdf93a6fd5       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   3 seconds ago       Running             kube-proxy                1                   9a6e297a5a80c       kube-proxy-cgl2t                            kube-system
	ad1e0d015fdf1       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   469c20253226f       kube-controller-manager-newest-cni-741831   kube-system
	3aaf72a8fab30       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   724f40770a8d3       kube-apiserver-newest-cni-741831            kube-system
	474f32f68077e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   7be288ac37e9d       kube-scheduler-newest-cni-741831            kube-system
	dd0e3d8eec101       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   3965e7e9001b3       etcd-newest-cni-741831                      kube-system
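
Note: the table above is CRI-level container state; ATTEMPT 1 on every container reflects the restart of an existing cluster rather than crash loops. The same view can be pulled on the node with crictl, e.g. for the restarted kube-proxy; a sketch assuming crictl is on the node's PATH as in the kicbase image:

	docker exec newest-cni-741831 crictl ps -a --name kube-proxy
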
	
	
	==> describe nodes <==
	Name:               newest-cni-741831
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-741831
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=newest-cni-741831
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_06_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:06:37 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-741831
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:07:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:07:02 +0000   Sat, 18 Oct 2025 15:06:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:07:02 +0000   Sat, 18 Oct 2025 15:06:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:07:02 +0000   Sat, 18 Oct 2025 15:06:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 15:07:02 +0000   Sat, 18 Oct 2025 15:06:35 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-741831
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                93172a2b-ef45-4eea-9f95-aa90d7a726bd
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-741831                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-pj5dl                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22s
	  kube-system                 kube-apiserver-newest-cni-741831             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-newest-cni-741831    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-cgl2t                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-scheduler-newest-cni-741831             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 21s              kube-proxy       
	  Normal  Starting                 3s               kube-proxy       
	  Normal  Starting                 27s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s              kubelet          Node newest-cni-741831 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s              kubelet          Node newest-cni-741831 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s              kubelet          Node newest-cni-741831 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23s              node-controller  Node newest-cni-741831 event: Registered Node newest-cni-741831 in Controller
	  Normal  Starting                 9s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)  kubelet          Node newest-cni-741831 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)  kubelet          Node newest-cni-741831 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x8 over 9s)  kubelet          Node newest-cni-741831 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           1s               node-controller  Node newest-cni-741831 event: Registered Node newest-cni-741831 in Controller
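
Note: per the Ready condition above, the node is NotReady only because no CNI config exists yet in /etc/cni/net.d; kindnet started seconds earlier (see the container status table) and writes that config on its first sync, after which Ready flips to True. A minimal check, assuming the node container name from this run:

	docker exec newest-cni-741831 ls /etc/cni/net.d
	kubectl get node newest-cni-741831 -w   # watch for Ready to flip to True
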
	
	
	==> dmesg <==
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
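
Note: the martian-source entries above mean the kernel saw packets claiming a 127.0.0.1 source arriving on eth0; they are logged because martian logging is enabled on the CI host, and in nested container setups like this one they are routing noise rather than a test failure. Checking the setting (sketch):

	sysctl net.ipv4.conf.all.log_martians
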
	
	
	==> etcd [dd0e3d8eec101af0cdf8be9c93b2425a97577f086a02a3133b60fa3686e3c82a] <==
	{"level":"warn","ts":"2025-10-18T15:07:01.718368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.730685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.738416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.746751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.757899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.765577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.776417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.787524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.798126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.809494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.824546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.836016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.845020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.855602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.863254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.870315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.877692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.887198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.895967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.906591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.916496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.935841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.945436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.952884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:02.032475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58920","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:07:07 up  2:49,  0 user,  load average: 5.61, 3.45, 2.19
	Linux newest-cni-741831 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bd9c45aa764304c606046593f542a1af165aa1d2c8384b462aa7dcb38f337591] <==
	I1018 15:07:04.112475       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 15:07:04.130196       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1018 15:07:04.131411       1 main.go:148] setting mtu 1500 for CNI 
	I1018 15:07:04.131456       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 15:07:04.131485       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T15:07:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 15:07:04.340651       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 15:07:04.340687       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 15:07:04.340705       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 15:07:04.340880       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 15:07:04.740814       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 15:07:04.742380       1 metrics.go:72] Registering metrics
	I1018 15:07:04.742787       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [3aaf72a8fab306d03fd39b588902cacfd12938e06467694af5c1db7254c80b0d] <==
	I1018 15:07:02.673144       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 15:07:02.677408       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 15:07:02.673189       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 15:07:02.678051       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 15:07:02.678120       1 aggregator.go:171] initial CRD sync complete...
	I1018 15:07:02.678149       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 15:07:02.678173       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 15:07:02.678195       1 cache.go:39] Caches are synced for autoregister controller
	I1018 15:07:02.690366       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 15:07:02.728328       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:07:02.742062       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 15:07:02.756805       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 15:07:02.756855       1 policy_source.go:240] refreshing policies
	I1018 15:07:02.850052       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 15:07:03.079412       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 15:07:03.124125       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 15:07:03.146654       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:07:03.154291       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:07:03.169445       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 15:07:03.238898       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.142.159"}
	I1018 15:07:03.254399       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.113.99"}
	I1018 15:07:03.574597       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:07:06.464210       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 15:07:06.514239       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 15:07:06.564732       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ad1e0d015fdf1368ded1825253a96c3951279196aa718df2645c2024b26f3fc1] <==
	I1018 15:07:06.037996       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 15:07:06.043598       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 15:07:06.060990       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 15:07:06.061013       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 15:07:06.061075       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 15:07:06.061901       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 15:07:06.062100       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 15:07:06.063280       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 15:07:06.063313       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 15:07:06.065642       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 15:07:06.065669       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 15:07:06.067968       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 15:07:06.067987       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 15:07:06.070278       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 15:07:06.073582       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 15:07:06.074688       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 15:07:06.075745       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 15:07:06.079161       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 15:07:06.082488       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 15:07:06.082585       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 15:07:06.085965       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 15:07:06.089172       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 15:07:06.089181       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 15:07:06.093406       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 15:07:06.093597       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [ad5bdf93a6fd51d0e0a0d2091f1b59559b68981cabe5ac45c5a8502f33c102ad] <==
	I1018 15:07:03.940705       1 server_linux.go:53] "Using iptables proxy"
	I1018 15:07:04.002471       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 15:07:04.103644       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 15:07:04.103691       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1018 15:07:04.103786       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 15:07:04.133268       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 15:07:04.133379       1 server_linux.go:132] "Using iptables Proxier"
	I1018 15:07:04.140857       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 15:07:04.141452       1 server.go:527] "Version info" version="v1.34.1"
	I1018 15:07:04.143116       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:07:04.148512       1 config.go:200] "Starting service config controller"
	I1018 15:07:04.148987       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 15:07:04.149037       1 config.go:309] "Starting node config controller"
	I1018 15:07:04.149499       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 15:07:04.149539       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 15:07:04.149162       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 15:07:04.149689       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 15:07:04.149143       1 config.go:106] "Starting endpoint slice config controller"
	I1018 15:07:04.149756       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 15:07:04.249214       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 15:07:04.250792       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 15:07:04.250816       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [474f32f68077ed4928d09f0824bece6ea546498605afd71ed210110e677a4f30] <==
	I1018 15:07:00.834244       1 serving.go:386] Generated self-signed cert in-memory
	W1018 15:07:02.647789       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 15:07:02.647836       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 15:07:02.647849       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 15:07:02.647866       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 15:07:02.698149       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 15:07:02.698184       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:07:02.701807       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:07:02.701895       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:07:02.703185       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 15:07:02.703573       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 15:07:02.802853       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 15:07:01 newest-cni-741831 kubelet[652]: E1018 15:07:01.794191     652 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-741831\" not found" node="newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: I1018 15:07:02.821317     652 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: I1018 15:07:02.832448     652 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: I1018 15:07:02.835463     652 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: I1018 15:07:02.835559     652 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: I1018 15:07:02.838077     652 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: E1018 15:07:02.844813     652 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-741831\" already exists" pod="kube-system/kube-controller-manager-newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: I1018 15:07:02.844855     652 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: E1018 15:07:02.864024     652 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-741831\" already exists" pod="kube-system/kube-scheduler-newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: I1018 15:07:02.864071     652 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: E1018 15:07:02.873992     652 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-741831\" already exists" pod="kube-system/etcd-newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: I1018 15:07:02.874039     652 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: E1018 15:07:02.881009     652 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-741831\" already exists" pod="kube-system/kube-apiserver-newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: I1018 15:07:02.984286     652 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: E1018 15:07:02.993935     652 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-741831\" already exists" pod="kube-system/etcd-newest-cni-741831"
	Oct 18 15:07:03 newest-cni-741831 kubelet[652]: I1018 15:07:03.510635     652 apiserver.go:52] "Watching apiserver"
	Oct 18 15:07:03 newest-cni-741831 kubelet[652]: I1018 15:07:03.519357     652 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 15:07:03 newest-cni-741831 kubelet[652]: I1018 15:07:03.531446     652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/338db4d1-7623-42cc-ac47-40e8f34baf31-xtables-lock\") pod \"kindnet-pj5dl\" (UID: \"338db4d1-7623-42cc-ac47-40e8f34baf31\") " pod="kube-system/kindnet-pj5dl"
	Oct 18 15:07:03 newest-cni-741831 kubelet[652]: I1018 15:07:03.531532     652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e790462a-4f99-4636-aa73-b8cf26812e75-lib-modules\") pod \"kube-proxy-cgl2t\" (UID: \"e790462a-4f99-4636-aa73-b8cf26812e75\") " pod="kube-system/kube-proxy-cgl2t"
	Oct 18 15:07:03 newest-cni-741831 kubelet[652]: I1018 15:07:03.531560     652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e790462a-4f99-4636-aa73-b8cf26812e75-xtables-lock\") pod \"kube-proxy-cgl2t\" (UID: \"e790462a-4f99-4636-aa73-b8cf26812e75\") " pod="kube-system/kube-proxy-cgl2t"
	Oct 18 15:07:03 newest-cni-741831 kubelet[652]: I1018 15:07:03.531620     652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/338db4d1-7623-42cc-ac47-40e8f34baf31-cni-cfg\") pod \"kindnet-pj5dl\" (UID: \"338db4d1-7623-42cc-ac47-40e8f34baf31\") " pod="kube-system/kindnet-pj5dl"
	Oct 18 15:07:03 newest-cni-741831 kubelet[652]: I1018 15:07:03.531674     652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/338db4d1-7623-42cc-ac47-40e8f34baf31-lib-modules\") pod \"kindnet-pj5dl\" (UID: \"338db4d1-7623-42cc-ac47-40e8f34baf31\") " pod="kube-system/kindnet-pj5dl"
	Oct 18 15:07:05 newest-cni-741831 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 15:07:05 newest-cni-741831 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 15:07:05 newest-cni-741831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
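Note on the tail of the logs above: the long run of etcd "rejected connection on client endpoint ... error: EOF" warnings is the usual signature of plain TCP reachability probes that connect to the TLS client port and hang up before any handshake, and the final kubelet lines show systemd stopping kubelet.service, consistent with the pause operation under test rather than a crash. A minimal sketch of a probe that would produce exactly this warning (assumptions: etcd serves TLS on its default client port 2379 inside the node, and nc is present in the node image):

	# a bare TCP connect that closes without a TLS handshake makes etcd log
	# "rejected connection on client endpoint ... error: EOF"
	out/minikube-linux-amd64 -p newest-cni-741831 ssh "nc -z 127.0.0.1 2379"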
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-741831 -n newest-cni-741831
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-741831 -n newest-cni-741831: exit status 2 (410.055937ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
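The --format argument to minikube status is a Go template rendered over the status struct, so several fields can be read in one invocation instead of one call per field. A sketch (Host and APIServer appear elsewhere in this report; Kubelet is assumed to be a sibling field on the same struct):

	out/minikube-linux-amd64 status -p newest-cni-741831 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'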
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-741831 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-dksbs storage-provisioner dashboard-metrics-scraper-6ffb444bf9-g8lc4 kubernetes-dashboard-855c9754f9-dtpqf
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-741831 describe pod coredns-66bc5c9577-dksbs storage-provisioner dashboard-metrics-scraper-6ffb444bf9-g8lc4 kubernetes-dashboard-855c9754f9-dtpqf
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-741831 describe pod coredns-66bc5c9577-dksbs storage-provisioner dashboard-metrics-scraper-6ffb444bf9-g8lc4 kubernetes-dashboard-855c9754f9-dtpqf: exit status 1 (89.78143ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-dksbs" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-g8lc4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-dtpqf" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-741831 describe pod coredns-66bc5c9577-dksbs storage-provisioner dashboard-metrics-scraper-6ffb444bf9-g8lc4 kubernetes-dashboard-855c9754f9-dtpqf: exit status 1
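The NotFound errors are expected from that describe invocation: without -n, kubectl searches only the context's current namespace (default here), while the non-running pods live in kube-system and kubernetes-dashboard (the dashboard namespace is visible in the kube-apiserver clusterIP allocations above). A namespace-qualified variant, assuming the usual minikube layout:

	kubectl --context newest-cni-741831 -n kube-system describe pod coredns-66bc5c9577-dksbs storage-provisioner
	kubectl --context newest-cni-741831 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-g8lc4 kubernetes-dashboard-855c9754f9-dtpqf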
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-741831
helpers_test.go:243: (dbg) docker inspect newest-cni-741831:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50",
	        "Created": "2025-10-18T15:06:24.424165883Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 364193,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T15:06:51.921342323Z",
	            "FinishedAt": "2025-10-18T15:06:50.972031863Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50/hostname",
	        "HostsPath": "/var/lib/docker/containers/80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50/hosts",
	        "LogPath": "/var/lib/docker/containers/80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50/80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50-json.log",
	        "Name": "/newest-cni-741831",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-741831:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-741831",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "80f647182c9589a4b9915f4db6fff1543dcbb89058e24af11d5c199cf89eca50",
	                "LowerDir": "/var/lib/docker/overlay2/ccef2c722d5f80b0292930b40c9275b44275a6f6d4de631a2a924b9d5808916e-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ccef2c722d5f80b0292930b40c9275b44275a6f6d4de631a2a924b9d5808916e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ccef2c722d5f80b0292930b40c9275b44275a6f6d4de631a2a924b9d5808916e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ccef2c722d5f80b0292930b40c9275b44275a6f6d4de631a2a924b9d5808916e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-741831",
	                "Source": "/var/lib/docker/volumes/newest-cni-741831/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-741831",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-741831",
	                "name.minikube.sigs.k8s.io": "newest-cni-741831",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e83de6f0bc9a1243b83bad4d2ce36aa3ac43695774fe2be4b01df50cc6fb6b39",
	            "SandboxKey": "/var/run/docker/netns/e83de6f0bc9a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-741831": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:cd:dd:bd:cd:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f155453f4f173ba69c6ef6bc9d76a496868feaebcb5b5f9ed955e83061073a43",
	                    "EndpointID": "415494b5751db51902f0d37a4ecc4e2eb888c0ae034ba2cc08d9c68245aa1d76",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-741831",
	                        "80f647182c95"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
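Rather than scanning the full JSON above, individual fields can be pulled with docker inspect's built-in Go-template support; the template paths below mirror the JSON structure shown:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-741831
	# host port published for the apiserver port (8443/tcp under NetworkSettings.Ports)
	docker inspect -f '{{(index .NetworkSettings.Ports "8443/tcp" 0).HostPort}}' newest-cni-741831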
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-741831 -n newest-cni-741831
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-741831 -n newest-cni-741831: exit status 2 (385.039355ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-741831 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-741831 logs -n 25: (1.141524398s)
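Here -n bounds how many lines minikube logs prints per component. For offline triage the full output can be written to a file instead; --file is a documented minikube logs flag:

	out/minikube-linux-amd64 -p newest-cni-741831 logs --file=/tmp/newest-cni-741831.log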
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-948537                                                                                                                                                                                                                     │ old-k8s-version-948537       │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ delete  │ -p disable-driver-mounts-677415                                                                                                                                                                                                               │ disable-driver-mounts-677415 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:05 UTC │
	│ start   │ -p default-k8s-diff-port-489104 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:05 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p cert-expiration-327346 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-327346       │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ delete  │ -p cert-expiration-327346                                                                                                                                                                                                                     │ cert-expiration-327346       │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p newest-cni-741831 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-775590 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ stop    │ -p embed-certs-775590 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ image   │ no-preload-165275 image list --format=json                                                                                                                                                                                                    │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ pause   │ -p no-preload-165275 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-489104 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-775590 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ delete  │ -p no-preload-165275                                                                                                                                                                                                                          │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p embed-certs-775590 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-489104 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ delete  │ -p no-preload-165275                                                                                                                                                                                                                          │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p auto-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-741831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ stop    │ -p newest-cni-741831 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-741831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p newest-cni-741831 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:07 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-489104 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p default-k8s-diff-port-489104 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ image   │ newest-cni-741831 image list --format=json                                                                                                                                                                                                    │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ pause   │ -p newest-cni-741831 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:06:59
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:06:59.872233  366690 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:06:59.872747  366690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:06:59.872757  366690 out.go:374] Setting ErrFile to fd 2...
	I1018 15:06:59.872764  366690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:06:59.873131  366690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:06:59.873705  366690 out.go:368] Setting JSON to false
	I1018 15:06:59.875300  366690 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10171,"bootTime":1760789849,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:06:59.875429  366690 start.go:141] virtualization: kvm guest
	I1018 15:06:59.877582  366690 out.go:179] * [default-k8s-diff-port-489104] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:06:59.879381  366690 notify.go:220] Checking for updates...
	I1018 15:06:59.879408  366690 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:06:59.883115  366690 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:06:59.884420  366690 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:06:59.886657  366690 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 15:06:59.887948  366690 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:06:59.889165  366690 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:06:59.891121  366690 config.go:182] Loaded profile config "default-k8s-diff-port-489104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:06:59.891809  366690 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:06:59.924074  366690 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:06:59.924193  366690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:07:00.014603  366690 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-18 15:06:59.998291777 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:07:00.014822  366690 docker.go:318] overlay module found
	I1018 15:07:00.017160  366690 out.go:179] * Using the docker driver based on existing profile
	I1018 15:07:00.019336  366690 start.go:305] selected driver: docker
	I1018 15:07:00.019354  366690 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-489104 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-489104 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:07:00.019515  366690 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:07:00.020338  366690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:07:00.091525  366690 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-18 15:07:00.081133385 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:07:00.091959  366690 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:07:00.091999  366690 cni.go:84] Creating CNI manager for ""
	I1018 15:07:00.092065  366690 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:07:00.092123  366690 start.go:349] cluster config:
	{Name:default-k8s-diff-port-489104 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-489104 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:07:00.094372  366690 out.go:179] * Starting "default-k8s-diff-port-489104" primary control-plane node in "default-k8s-diff-port-489104" cluster
	I1018 15:07:00.095571  366690 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:07:00.096725  366690 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:07:00.097794  366690 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:07:00.097861  366690 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 15:07:00.097855  366690 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 15:07:00.097874  366690 cache.go:58] Caching tarball of preloaded images
	I1018 15:07:00.097989  366690 preload.go:233] Found /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 15:07:00.098004  366690 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 15:07:00.098137  366690 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/default-k8s-diff-port-489104/config.json ...
	I1018 15:07:00.121397  366690 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:07:00.121419  366690 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 15:07:00.121435  366690 cache.go:232] Successfully downloaded all kic artifacts
	I1018 15:07:00.121463  366690 start.go:360] acquireMachinesLock for default-k8s-diff-port-489104: {Name:mkc98cd1d4086725a8dd8ef11198f9481b2bbd15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:07:00.121546  366690 start.go:364] duration metric: took 43.139µs to acquireMachinesLock for "default-k8s-diff-port-489104"
	I1018 15:07:00.121569  366690 start.go:96] Skipping create...Using existing machine configuration
	I1018 15:07:00.121581  366690 fix.go:54] fixHost starting: 
	I1018 15:07:00.121808  366690 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-489104 --format={{.State.Status}}
	I1018 15:07:00.140247  366690 fix.go:112] recreateIfNeeded on default-k8s-diff-port-489104: state=Stopped err=<nil>
	W1018 15:07:00.140281  366690 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 15:06:56.617433  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	W1018 15:06:59.118343  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	W1018 15:07:01.141447  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	I1018 15:06:59.486372  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 15:06:59.486406  363917 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 15:06:59.486480  363917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:59.519782  363917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:59.521972  363917 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 15:06:59.522075  363917 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 15:06:59.522184  363917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-741831
	I1018 15:06:59.525273  363917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
	I1018 15:06:59.556357  363917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/newest-cni-741831/id_rsa Username:docker}
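
The repeated `docker container inspect -f` template above resolves the host port Docker published for the container's 22/tcp, which is what feeds the SSH clients on port 33098. A minimal Go sketch of the same lookup via the Docker CLI (container name taken from this log; illustrative only, not minikube's own code):

-- sketch --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the log runs: index the port map at "22/tcp" and
	// take the first binding's HostPort.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"newest-cni-741831").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 33098
}
-- /sketch --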
	I1018 15:06:59.677169  363917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:06:59.681275  363917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 15:06:59.711883  363917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 15:06:59.715407  363917 api_server.go:52] waiting for apiserver process to appear ...
	I1018 15:06:59.715468  363917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:06:59.729955  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 15:06:59.729985  363917 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 15:06:59.761621  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 15:06:59.761703  363917 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 15:06:59.789977  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 15:06:59.790003  363917 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 15:06:59.810880  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 15:06:59.810946  363917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 15:06:59.838233  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 15:06:59.838260  363917 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 15:06:59.865973  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 15:06:59.866004  363917 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 15:06:59.884891  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 15:06:59.884937  363917 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 15:06:59.909460  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 15:06:59.909841  363917 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 15:06:59.930643  363917 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 15:06:59.930671  363917 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 15:06:59.953230  363917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 15:07:03.409278  363917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.697304993s)
	I1018 15:07:03.409391  363917 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.693900115s)
	I1018 15:07:03.409425  363917 api_server.go:72] duration metric: took 3.973585291s to wait for apiserver process to appear ...
	I1018 15:07:03.409433  363917 api_server.go:88] waiting for apiserver healthz status ...
	I1018 15:07:03.409457  363917 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 15:07:03.409588  363917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.456312979s)
	I1018 15:07:03.409937  363917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.728606372s)
	I1018 15:07:03.411215  363917 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-741831 addons enable metrics-server
	
	I1018 15:07:03.415535  363917 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 15:07:03.415567  363917 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
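
The 500 above is expected while the apiserver is still running its post-start hooks (rbac/bootstrap-roles and the priority-class bootstrap are the two still pending); minikube simply re-polls until /healthz returns 200, as it does ~500ms later below. A minimal Go sketch of such a probe (URL and interval taken from this log; not minikube's implementation):

-- sketch --
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// During bootstrap the apiserver's cert is signed by the cluster CA;
	// this illustrative probe skips verification instead of loading it.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("status %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // "ok", as at 15:07:03.916 below
			}
		}
		time.Sleep(500 * time.Millisecond) // the log re-polls ~500ms later
	}
}
-- /sketch --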
	I1018 15:07:03.435041  363917 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 15:07:03.436855  363917 addons.go:514] duration metric: took 4.000782457s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 15:07:03.909626  363917 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 15:07:03.916379  363917 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1018 15:07:03.918535  363917 api_server.go:141] control plane version: v1.34.1
	I1018 15:07:03.918568  363917 api_server.go:131] duration metric: took 509.128473ms to wait for apiserver health ...
	I1018 15:07:03.918578  363917 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:07:03.924287  363917 system_pods.go:59] 8 kube-system pods found
	I1018 15:07:03.924333  363917 system_pods.go:61] "coredns-66bc5c9577-dksbs" [4afc2ce3-9388-42d7-a40e-f7fa9040d77f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 15:07:03.924372  363917 system_pods.go:61] "etcd-newest-cni-741831" [12f950c8-4dfa-4ccc-83d6-5610731545be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 15:07:03.924387  363917 system_pods.go:61] "kindnet-pj5dl" [338db4d1-7623-42cc-ac47-40e8f34baf31] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 15:07:03.924397  363917 system_pods.go:61] "kube-apiserver-newest-cni-741831" [046fc171-edaf-4ada-b09f-a1fc0d2baeee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 15:07:03.924406  363917 system_pods.go:61] "kube-controller-manager-newest-cni-741831" [2a2cd49d-4869-4d98-b7a3-2cf8ffacb083] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 15:07:03.924586  363917 system_pods.go:61] "kube-proxy-cgl2t" [e790462a-4f99-4636-aa73-b8cf26812e75] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 15:07:03.924603  363917 system_pods.go:61] "kube-scheduler-newest-cni-741831" [4c40112d-9d56-49a9-9442-be510c5aaf5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 15:07:03.924639  363917 system_pods.go:61] "storage-provisioner" [29182b74-a02c-4f22-9317-75f93297a124] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 15:07:03.924654  363917 system_pods.go:74] duration metric: took 6.066568ms to wait for pod list to return data ...
	I1018 15:07:03.924668  363917 default_sa.go:34] waiting for default service account to be created ...
	I1018 15:07:03.928517  363917 default_sa.go:45] found service account: "default"
	I1018 15:07:03.928549  363917 default_sa.go:55] duration metric: took 3.868642ms for default service account to be created ...
	I1018 15:07:03.928590  363917 kubeadm.go:586] duration metric: took 4.492749994s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 15:07:03.928626  363917 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:07:03.932060  363917 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 15:07:03.932093  363917 node_conditions.go:123] node cpu capacity is 8
	I1018 15:07:03.932107  363917 node_conditions.go:105] duration metric: took 3.47516ms to run NodePressure ...
	I1018 15:07:03.932124  363917 start.go:241] waiting for startup goroutines ...
	I1018 15:07:03.932134  363917 start.go:246] waiting for cluster config update ...
	I1018 15:07:03.932149  363917 start.go:255] writing updated cluster config ...
	I1018 15:07:03.932472  363917 ssh_runner.go:195] Run: rm -f paused
	I1018 15:07:03.996125  363917 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:07:03.998967  363917 out.go:179] * Done! kubectl is now configured to use "newest-cni-741831" cluster and "default" namespace by default
	I1018 15:07:00.012545  359679 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001279533s
	I1018 15:07:00.016373  359679 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 15:07:00.016571  359679 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 15:07:00.016689  359679 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 15:07:00.016805  359679 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 15:07:02.260943  359679 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.243926997s
	I1018 15:07:03.372090  359679 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.355765694s
	I1018 15:07:00.142085  366690 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-489104" ...
	I1018 15:07:00.142175  366690 cli_runner.go:164] Run: docker start default-k8s-diff-port-489104
	I1018 15:07:00.447143  366690 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-489104 --format={{.State.Status}}
	I1018 15:07:00.482063  366690 kic.go:430] container "default-k8s-diff-port-489104" state is running.
	I1018 15:07:00.483352  366690 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-489104
	I1018 15:07:00.512041  366690 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/default-k8s-diff-port-489104/config.json ...
	I1018 15:07:00.512470  366690 machine.go:93] provisionDockerMachine start ...
	I1018 15:07:00.512553  366690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-489104
	I1018 15:07:00.543650  366690 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:00.544247  366690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1018 15:07:00.544300  366690 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:07:00.545926  366690 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43572->127.0.0.1:33103: read: connection reset by peer
	I1018 15:07:03.707993  366690 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-489104
	
	I1018 15:07:03.708037  366690 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-489104"
	I1018 15:07:03.708108  366690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-489104
	I1018 15:07:03.732219  366690 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:03.732520  366690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1018 15:07:03.732540  366690 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-489104 && echo "default-k8s-diff-port-489104" | sudo tee /etc/hostname
	I1018 15:07:03.907349  366690 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-489104
	
	I1018 15:07:03.907433  366690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-489104
	I1018 15:07:03.933124  366690 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:03.933410  366690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1018 15:07:03.933441  366690 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-489104' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-489104/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-489104' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:07:04.091816  366690 main.go:141] libmachine: SSH cmd err, output: <nil>: 
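
The SSH command above is minikube's idempotent /etc/hosts update: leave the file alone if the hostname is already mapped, rewrite an existing 127.0.1.1 line, otherwise append one. The same pattern as a small Go sketch (path and hostname from this log; illustrative only):

-- sketch --
package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry mirrors the shell above: no-op if the hostname is
// present, replace an existing 127.0.1.1 line, else append a new one.
func ensureHostsEntry(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
		return nil // hostname already mapped
	}
	entry := "127.0.1.1 " + name
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte(entry)) // rewrite in place
	} else {
		data = append(data, []byte(entry+"\n")...) // append a new mapping
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "default-k8s-diff-port-489104"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
-- /sketch --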
	I1018 15:07:04.091984  366690 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 15:07:04.092115  366690 ubuntu.go:190] setting up certificates
	I1018 15:07:04.092146  366690 provision.go:84] configureAuth start
	I1018 15:07:04.092230  366690 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-489104
	I1018 15:07:04.119685  366690 provision.go:143] copyHostCerts
	I1018 15:07:04.119970  366690 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem, removing ...
	I1018 15:07:04.120006  366690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:07:04.120084  366690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 15:07:04.120202  366690 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem, removing ...
	I1018 15:07:04.120212  366690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:07:04.120260  366690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 15:07:04.120353  366690 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem, removing ...
	I1018 15:07:04.120360  366690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:07:04.120400  366690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 15:07:04.120479  366690 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-489104 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-489104 localhost minikube]
	I1018 15:07:04.270879  366690 provision.go:177] copyRemoteCerts
	I1018 15:07:04.270984  366690 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:07:04.271037  366690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-489104
	I1018 15:07:04.295683  366690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/default-k8s-diff-port-489104/id_rsa Username:docker}
	I1018 15:07:04.410594  366690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:07:04.442767  366690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 15:07:04.472765  366690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 15:07:04.501475  366690 provision.go:87] duration metric: took 409.310201ms to configureAuth
	I1018 15:07:04.501502  366690 ubuntu.go:206] setting minikube options for container-runtime
	I1018 15:07:04.501661  366690 config.go:182] Loaded profile config "default-k8s-diff-port-489104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:04.501744  366690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-489104
	I1018 15:07:04.527504  366690 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:04.528164  366690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1018 15:07:04.528224  366690 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:07:06.018972  359679 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002564854s
	I1018 15:07:06.033794  359679 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 15:07:06.046683  359679 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 15:07:06.056624  359679 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 15:07:06.056843  359679 kubeadm.go:318] [mark-control-plane] Marking the node auto-034446 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 15:07:06.066286  359679 kubeadm.go:318] [bootstrap-token] Using token: jwy8id.3lefdrff37gw7l4a
	W1018 15:07:03.619770  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	W1018 15:07:06.117215  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	I1018 15:07:05.647446  366690 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 15:07:05.647475  366690 machine.go:96] duration metric: took 5.134975185s to provisionDockerMachine
	I1018 15:07:05.647491  366690 start.go:293] postStartSetup for "default-k8s-diff-port-489104" (driver="docker")
	I1018 15:07:05.647504  366690 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:07:05.647567  366690 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:07:05.647612  366690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-489104
	I1018 15:07:05.678134  366690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/default-k8s-diff-port-489104/id_rsa Username:docker}
	I1018 15:07:05.780337  366690 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:07:05.785130  366690 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 15:07:05.785167  366690 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 15:07:05.785184  366690 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/addons for local assets ...
	I1018 15:07:05.785241  366690 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/files for local assets ...
	I1018 15:07:05.785342  366690 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem -> 931872.pem in /etc/ssl/certs
	I1018 15:07:05.785484  366690 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:07:05.794261  366690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:07:05.817751  366690 start.go:296] duration metric: took 170.242106ms for postStartSetup
	I1018 15:07:05.817861  366690 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 15:07:05.817901  366690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-489104
	I1018 15:07:05.837019  366690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/default-k8s-diff-port-489104/id_rsa Username:docker}
	I1018 15:07:05.932511  366690 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 15:07:05.937507  366690 fix.go:56] duration metric: took 5.815918569s for fixHost
	I1018 15:07:05.937542  366690 start.go:83] releasing machines lock for "default-k8s-diff-port-489104", held for 5.815983966s
	I1018 15:07:05.937615  366690 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-489104
	I1018 15:07:05.955204  366690 ssh_runner.go:195] Run: cat /version.json
	I1018 15:07:05.955261  366690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-489104
	I1018 15:07:05.955298  366690 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:07:05.955380  366690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-489104
	I1018 15:07:05.982313  366690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/default-k8s-diff-port-489104/id_rsa Username:docker}
	I1018 15:07:05.982383  366690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/default-k8s-diff-port-489104/id_rsa Username:docker}
	I1018 15:07:06.089046  366690 ssh_runner.go:195] Run: systemctl --version
	I1018 15:07:06.147967  366690 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:07:06.185027  366690 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:07:06.190229  366690 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:07:06.190293  366690 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:07:06.198711  366690 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 15:07:06.198733  366690 start.go:495] detecting cgroup driver to use...
	I1018 15:07:06.198762  366690 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 15:07:06.198812  366690 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:07:06.214530  366690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:07:06.228739  366690 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:07:06.228811  366690 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:07:06.243957  366690 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:07:06.256766  366690 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:07:06.356111  366690 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:07:06.461905  366690 docker.go:234] disabling docker service ...
	I1018 15:07:06.462014  366690 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:07:06.484105  366690 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:07:06.496450  366690 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:07:06.607308  366690 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:07:06.732201  366690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 15:07:06.753705  366690 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:07:06.773287  366690 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 15:07:06.773347  366690 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:06.784590  366690 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 15:07:06.784672  366690 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:06.794983  366690 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:06.805880  366690 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:06.816605  366690 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:07:06.826662  366690 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:06.838211  366690 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:06.849319  366690 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:06.862183  366690 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:07:06.871668  366690 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 15:07:06.879861  366690 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:07:06.975638  366690 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 15:07:07.103593  366690 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:07:07.103669  366690 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:07:07.108726  366690 start.go:563] Will wait 60s for crictl version
	I1018 15:07:07.108847  366690 ssh_runner.go:195] Run: which crictl
	I1018 15:07:07.113262  366690 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 15:07:07.143260  366690 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 15:07:07.143343  366690 ssh_runner.go:195] Run: crio --version
	I1018 15:07:07.182398  366690 ssh_runner.go:195] Run: crio --version
	I1018 15:07:07.216398  366690 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 15:07:06.067905  359679 out.go:252]   - Configuring RBAC rules ...
	I1018 15:07:06.068101  359679 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 15:07:06.071184  359679 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 15:07:06.077212  359679 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 15:07:06.081025  359679 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 15:07:06.083738  359679 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 15:07:06.086681  359679 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 15:07:06.426954  359679 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 15:07:06.845360  359679 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 15:07:07.426660  359679 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 15:07:07.427654  359679 kubeadm.go:318] 
	I1018 15:07:07.427753  359679 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 15:07:07.427764  359679 kubeadm.go:318] 
	I1018 15:07:07.427885  359679 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 15:07:07.427908  359679 kubeadm.go:318] 
	I1018 15:07:07.427959  359679 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 15:07:07.428050  359679 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 15:07:07.428132  359679 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 15:07:07.428143  359679 kubeadm.go:318] 
	I1018 15:07:07.428219  359679 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 15:07:07.428227  359679 kubeadm.go:318] 
	I1018 15:07:07.428288  359679 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 15:07:07.428297  359679 kubeadm.go:318] 
	I1018 15:07:07.428352  359679 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 15:07:07.428453  359679 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 15:07:07.428541  359679 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 15:07:07.428586  359679 kubeadm.go:318] 
	I1018 15:07:07.428723  359679 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 15:07:07.428837  359679 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 15:07:07.428850  359679 kubeadm.go:318] 
	I1018 15:07:07.428991  359679 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token jwy8id.3lefdrff37gw7l4a \
	I1018 15:07:07.429138  359679 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9845daa2461a34dd973607f8353b114051f1b03075ff9500a74fb21865e04329 \
	I1018 15:07:07.429173  359679 kubeadm.go:318] 	--control-plane 
	I1018 15:07:07.429182  359679 kubeadm.go:318] 
	I1018 15:07:07.429300  359679 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 15:07:07.429310  359679 kubeadm.go:318] 
	I1018 15:07:07.429451  359679 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token jwy8id.3lefdrff37gw7l4a \
	I1018 15:07:07.429623  359679 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9845daa2461a34dd973607f8353b114051f1b03075ff9500a74fb21865e04329 
	I1018 15:07:07.433300  359679 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 15:07:07.433469  359679 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
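
The join commands above embed --discovery-token-ca-cert-hash, which kubeadm derives as SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate. A hedged Go sketch of that derivation (the CA path is the one minikube copies certs to elsewhere in this log; kubeadm's own default would be /etc/kubernetes/pki/ca.crt):

-- sketch --
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in CA file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// SHA-256 over the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum) // should reproduce the hash in the join command
}
-- /sketch --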
	I1018 15:07:07.433501  359679 cni.go:84] Creating CNI manager for ""
	I1018 15:07:07.433519  359679 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:07:07.435441  359679 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 15:07:07.217594  366690 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-489104 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:07:07.235967  366690 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 15:07:07.240269  366690 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:07:07.250834  366690 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-489104 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-489104 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 15:07:07.250989  366690 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:07:07.251046  366690 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:07:07.286830  366690 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:07:07.286855  366690 crio.go:433] Images already preloaded, skipping extraction
	I1018 15:07:07.286931  366690 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:07:07.314523  366690 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:07:07.314545  366690 cache_images.go:85] Images are preloaded, skipping loading
	I1018 15:07:07.314554  366690 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1018 15:07:07.314931  366690 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-489104 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-489104 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 15:07:07.315218  366690 ssh_runner.go:195] Run: crio config
	I1018 15:07:07.366687  366690 cni.go:84] Creating CNI manager for ""
	I1018 15:07:07.366717  366690 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 15:07:07.366740  366690 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 15:07:07.366769  366690 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-489104 NodeName:default-k8s-diff-port-489104 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 15:07:07.366972  366690 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-489104"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
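
The kubeadm config above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that the log later copies to /var/tmp/minikube/kubeadm.yaml.new. A small Go sketch that walks such a stream and prints each document's kind, assuming gopkg.in/yaml.v3 as a dependency (illustrative only):

-- sketch --
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		// Each "---"-separated document carries its own apiVersion/kind.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break // end of the multi-document stream
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
-- /sketch --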
	
	I1018 15:07:07.367055  366690 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 15:07:07.375650  366690 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 15:07:07.375721  366690 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 15:07:07.385199  366690 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1018 15:07:07.398565  366690 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 15:07:07.414314  366690 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1018 15:07:07.431123  366690 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 15:07:07.436015  366690 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:07:07.449390  366690 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:07:07.551665  366690 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:07:07.575972  366690 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/default-k8s-diff-port-489104 for IP: 192.168.103.2
	I1018 15:07:07.575999  366690 certs.go:195] generating shared ca certs ...
	I1018 15:07:07.576051  366690 certs.go:227] acquiring lock for ca certs: {Name:mk6d3b4e44856f498157352ffe8bb89d2c7a3998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:07.576241  366690 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key
	I1018 15:07:07.576305  366690 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key
	I1018 15:07:07.576322  366690 certs.go:257] generating profile certs ...
	I1018 15:07:07.576424  366690 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/default-k8s-diff-port-489104/client.key
	I1018 15:07:07.576494  366690 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/default-k8s-diff-port-489104/apiserver.key.e6bfa6f6
	I1018 15:07:07.576547  366690 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/default-k8s-diff-port-489104/proxy-client.key
	I1018 15:07:07.576678  366690 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem (1338 bytes)
	W1018 15:07:07.576720  366690 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187_empty.pem, impossibly tiny 0 bytes
	I1018 15:07:07.576730  366690 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 15:07:07.576758  366690 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem (1082 bytes)
	I1018 15:07:07.576789  366690 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem (1123 bytes)
	I1018 15:07:07.576822  366690 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem (1675 bytes)
	I1018 15:07:07.576886  366690 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:07:07.577736  366690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 15:07:07.600310  366690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 15:07:07.627505  366690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 15:07:07.653314  366690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 15:07:07.688043  366690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/default-k8s-diff-port-489104/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 15:07:07.715902  366690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/default-k8s-diff-port-489104/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 15:07:07.738805  366690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/default-k8s-diff-port-489104/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 15:07:07.761607  366690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/default-k8s-diff-port-489104/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 15:07:07.797323  366690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /usr/share/ca-certificates/931872.pem (1708 bytes)
	I1018 15:07:07.843371  366690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 15:07:07.869413  366690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem --> /usr/share/ca-certificates/93187.pem (1338 bytes)
	I1018 15:07:07.893164  366690 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 15:07:07.908403  366690 ssh_runner.go:195] Run: openssl version
	I1018 15:07:07.916698  366690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/931872.pem && ln -fs /usr/share/ca-certificates/931872.pem /etc/ssl/certs/931872.pem"
	I1018 15:07:07.927626  366690 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/931872.pem
	I1018 15:07:07.932395  366690 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 14:26 /usr/share/ca-certificates/931872.pem
	I1018 15:07:07.932602  366690 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/931872.pem
	I1018 15:07:07.972904  366690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/931872.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 15:07:07.982801  366690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 15:07:07.992160  366690 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:07:07.996531  366690 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:07:07.996594  366690 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:07:08.036497  366690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 15:07:08.046220  366690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93187.pem && ln -fs /usr/share/ca-certificates/93187.pem /etc/ssl/certs/93187.pem"
	I1018 15:07:08.055872  366690 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93187.pem
	I1018 15:07:08.060627  366690 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 14:26 /usr/share/ca-certificates/93187.pem
	I1018 15:07:08.060686  366690 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93187.pem
	I1018 15:07:08.099628  366690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93187.pem /etc/ssl/certs/51391683.0"
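
The openssl x509 -hash / ln -fs pairs above are how each CA lands in the OpenSSL trust directory: OpenSSL resolves trust anchors by a hash of the certificate's subject, so every PEM copied into /usr/share/ca-certificates gets a <hash>.0 symlink in /etc/ssl/certs (b5213941.0 for minikubeCA.pem above). A minimal Go sketch of the same flow, assuming openssl on PATH and root privileges; linkCACert is an illustrative name, not minikube's actual helper:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of a PEM file and points
	// /etc/ssl/certs/<hash>.0 at it, mirroring the hash/ln pairs in the log.
	func linkCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // -f semantics: replace a stale link if one exists
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
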
	I1018 15:07:08.109553  366690 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 15:07:08.114822  366690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 15:07:08.166165  366690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 15:07:08.217413  366690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 15:07:08.280516  366690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 15:07:08.340251  366690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 15:07:08.401663  366690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
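
The -checkend 86400 probes that follow are pure expiry checks: openssl x509 -checkend N exits 0 when the certificate will still be valid N seconds from now and 1 when it will have expired, so the 86400-second window flags any control-plane certificate that dies within 24 hours. The same test from Go, as a sketch assuming openssl on PATH; expiresWithin is an illustrative name:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strconv"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within d,
	// using the same openssl -checkend probe as the log above.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		secs := strconv.Itoa(int(d.Seconds()))
		err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", secs).Run()
		if err == nil {
			return false, nil // exit 0: still valid d from now
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
			return true, nil // exit 1: certificate would be expired
		}
		return false, err // openssl itself failed
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}
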
	I1018 15:07:08.448140  366690 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-489104 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-489104 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:07:08.448262  366690 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 15:07:08.448339  366690 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 15:07:08.488961  366690 cri.go:89] found id: "7fb5589151f3a78025f09ce1b546891fb02d25162971c57434743de3e24cbe9f"
	I1018 15:07:08.488988  366690 cri.go:89] found id: "ce9720cd32591b6942daf642e28e8696920a5b3fcb4f8eddcd689c9ef3054c1e"
	I1018 15:07:08.488994  366690 cri.go:89] found id: "1e308c368e373ccff9c4504f9e6503c09e4a7d1e0200e60472eaf38378135b96"
	I1018 15:07:08.488999  366690 cri.go:89] found id: "2358e366cd9757f3067562185021f0051cae924e07f221015b53a392bf5f90b2"
	I1018 15:07:08.489003  366690 cri.go:89] found id: ""
	I1018 15:07:08.489049  366690 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 15:07:08.507509  366690 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:07:08Z" level=error msg="open /run/runc: no such file or directory"
	I1018 15:07:08.507605  366690 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 15:07:08.520727  366690 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 15:07:08.520752  366690 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 15:07:08.520866  366690 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 15:07:08.532838  366690 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 15:07:08.533734  366690 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-489104" does not appear in /home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:07:08.534565  366690 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-89690/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-489104" cluster setting kubeconfig missing "default-k8s-diff-port-489104" context setting]
	I1018 15:07:08.535586  366690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/kubeconfig: {Name:mkdc6fbc9be8e57447490b30dd5bb0086611876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
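
The "does not appear in ... kubeconfig" check reduces to loading the kubeconfig and looking for a cluster and a context entry under the profile name; when either is missing, the file is repaired under the WriteFile lock shown above. A sketch using client-go's clientcmd loader; hasContext is an illustrative name:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	// hasContext reports whether a kubeconfig file already carries both a
	// cluster and a context entry named name.
	func hasContext(path, name string) (bool, error) {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return false, err
		}
		_, hasCluster := cfg.Clusters[name]
		_, hasCtx := cfg.Contexts[name]
		return hasCluster && hasCtx, nil
	}

	func main() {
		ok, err := hasContext(clientcmd.RecommendedHomeFile, "default-k8s-diff-port-489104")
		fmt.Println(ok, err)
	}
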
	I1018 15:07:08.537809  366690 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 15:07:08.547635  366690 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1018 15:07:08.547670  366690 kubeadm.go:601] duration metric: took 26.911317ms to restartPrimaryControlPlane
	I1018 15:07:08.547681  366690 kubeadm.go:402] duration metric: took 99.553601ms to StartCluster
	I1018 15:07:08.547700  366690 settings.go:142] acquiring lock: {Name:mk9d9d72167811b3554435e137ed17137bf54a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:08.547769  366690 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:07:08.549460  366690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/kubeconfig: {Name:mkdc6fbc9be8e57447490b30dd5bb0086611876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:08.549695  366690 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:07:08.549775  366690 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 15:07:08.549883  366690 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-489104"
	I1018 15:07:08.549902  366690 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-489104"
	W1018 15:07:08.549928  366690 addons.go:247] addon storage-provisioner should already be in state true
	I1018 15:07:08.549953  366690 config.go:182] Loaded profile config "default-k8s-diff-port-489104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:08.549968  366690 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-489104"
	I1018 15:07:08.549982  366690 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-489104"
	I1018 15:07:08.549985  366690 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-489104"
	I1018 15:07:08.549959  366690 host.go:66] Checking if "default-k8s-diff-port-489104" exists ...
	I1018 15:07:08.550013  366690 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-489104"
	W1018 15:07:08.550022  366690 addons.go:247] addon dashboard should already be in state true
	I1018 15:07:08.550066  366690 host.go:66] Checking if "default-k8s-diff-port-489104" exists ...
	I1018 15:07:08.550322  366690 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-489104 --format={{.State.Status}}
	I1018 15:07:08.550522  366690 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-489104 --format={{.State.Status}}
	I1018 15:07:08.550544  366690 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-489104 --format={{.State.Status}}
	I1018 15:07:08.551543  366690 out.go:179] * Verifying Kubernetes components...
	I1018 15:07:08.553412  366690 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:07:08.581012  366690 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-489104"
	W1018 15:07:08.581040  366690 addons.go:247] addon default-storageclass should already be in state true
	I1018 15:07:08.581074  366690 host.go:66] Checking if "default-k8s-diff-port-489104" exists ...
	I1018 15:07:08.581624  366690 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-489104 --format={{.State.Status}}
	I1018 15:07:08.581896  366690 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 15:07:08.581903  366690 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 15:07:08.583223  366690 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 15:07:08.583280  366690 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 15:07:08.583305  366690 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 15:07:08.583364  366690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-489104
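
The inspect template on the line above pulls the host port Docker published for the container's 22/tcp, which is how minikube locates the SSH endpoint of a kic node; "docker port default-k8s-diff-port-489104 22/tcp" reports the same mapping. The lookup from Go, with hostSSHPort as an illustrative name:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// hostSSHPort runs the same inspect template as the log above and
	// returns the host port mapped to the container's 22/tcp.
	func hostSSHPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("default-k8s-diff-port-489104")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(port)
	}
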
	
	
	==> CRI-O <==
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.820144201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.826640816Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0c341a41-6d94-4fa9-bf70-b4ab8dd8cdcd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.827246446Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=08b69e25-e169-4640-b3cd-98084a3bd640 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.829862192Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.830957171Z" level=info msg="Ran pod sandbox 9a6e297a5a80cf47c721fd3a14bbf8813ab6a669f3755f12d7f24eed7ea41be7 with infra container: kube-system/kube-proxy-cgl2t/POD" id=0c341a41-6d94-4fa9-bf70-b4ab8dd8cdcd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.831120539Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.832466016Z" level=info msg="Ran pod sandbox 4dc6db69e365c4fa1d61fc14af741e61019dca09e04f8a9f0e8721d5b9840d6c with infra container: kube-system/kindnet-pj5dl/POD" id=08b69e25-e169-4640-b3cd-98084a3bd640 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.832700654Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=08014cce-24e6-4570-a977-d817926ae49a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.833844034Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=00f3d160-44ed-4613-854b-35690ec45bcc name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.834291858Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=dee53644-5fac-41f4-bf46-f8e38297e413 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.835841638Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0cfbef94-150f-4fd9-a08f-0c066725591c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.835932306Z" level=info msg="Creating container: kube-system/kube-proxy-cgl2t/kube-proxy" id=d0ac5192-4ac7-43fe-ae77-874166b77b9c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.836206819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.83722113Z" level=info msg="Creating container: kube-system/kindnet-pj5dl/kindnet-cni" id=8faf631e-cf92-440e-bf94-0971826c78b1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.83935718Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.842656181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.843328001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.844516322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.845311616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.877571952Z" level=info msg="Created container bd9c45aa764304c606046593f542a1af165aa1d2c8384b462aa7dcb38f337591: kube-system/kindnet-pj5dl/kindnet-cni" id=8faf631e-cf92-440e-bf94-0971826c78b1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.87828104Z" level=info msg="Starting container: bd9c45aa764304c606046593f542a1af165aa1d2c8384b462aa7dcb38f337591" id=37fe3bc1-d042-4ef5-b2dc-c507bef4ecb7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.881179049Z" level=info msg="Started container" PID=1016 containerID=bd9c45aa764304c606046593f542a1af165aa1d2c8384b462aa7dcb38f337591 description=kube-system/kindnet-pj5dl/kindnet-cni id=37fe3bc1-d042-4ef5-b2dc-c507bef4ecb7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4dc6db69e365c4fa1d61fc14af741e61019dca09e04f8a9f0e8721d5b9840d6c
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.882275283Z" level=info msg="Created container ad5bdf93a6fd51d0e0a0d2091f1b59559b68981cabe5ac45c5a8502f33c102ad: kube-system/kube-proxy-cgl2t/kube-proxy" id=d0ac5192-4ac7-43fe-ae77-874166b77b9c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.88300161Z" level=info msg="Starting container: ad5bdf93a6fd51d0e0a0d2091f1b59559b68981cabe5ac45c5a8502f33c102ad" id=bd1cb3db-7aba-460c-afb7-3bcb3a8252b2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:07:03 newest-cni-741831 crio[509]: time="2025-10-18T15:07:03.886617017Z" level=info msg="Started container" PID=1017 containerID=ad5bdf93a6fd51d0e0a0d2091f1b59559b68981cabe5ac45c5a8502f33c102ad description=kube-system/kube-proxy-cgl2t/kube-proxy id=bd1cb3db-7aba-460c-afb7-3bcb3a8252b2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9a6e297a5a80cf47c721fd3a14bbf8813ab6a669f3755f12d7f24eed7ea41be7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bd9c45aa76430       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   4dc6db69e365c       kindnet-pj5dl                               kube-system
	ad5bdf93a6fd5       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   9a6e297a5a80c       kube-proxy-cgl2t                            kube-system
	ad1e0d015fdf1       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   10 seconds ago      Running             kube-controller-manager   1                   469c20253226f       kube-controller-manager-newest-cni-741831   kube-system
	3aaf72a8fab30       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   10 seconds ago      Running             kube-apiserver            1                   724f40770a8d3       kube-apiserver-newest-cni-741831            kube-system
	474f32f68077e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   10 seconds ago      Running             kube-scheduler            1                   7be288ac37e9d       kube-scheduler-newest-cni-741831            kube-system
	dd0e3d8eec101       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   10 seconds ago      Running             etcd                      1                   3965e7e9001b3       etcd-newest-cni-741831                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-741831
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-741831
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=newest-cni-741831
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_06_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:06:37 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-741831
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:07:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:07:02 +0000   Sat, 18 Oct 2025 15:06:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:07:02 +0000   Sat, 18 Oct 2025 15:06:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:07:02 +0000   Sat, 18 Oct 2025 15:06:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 15:07:02 +0000   Sat, 18 Oct 2025 15:06:35 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-741831
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                93172a2b-ef45-4eea-9f95-aa90d7a726bd
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-741831                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-pj5dl                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-newest-cni-741831             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-newest-cni-741831    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-cgl2t                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-newest-cni-741831             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node newest-cni-741831 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node newest-cni-741831 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node newest-cni-741831 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node newest-cni-741831 event: Registered Node newest-cni-741831 in Controller
	  Normal  Starting                 12s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node newest-cni-741831 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node newest-cni-741831 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x8 over 12s)  kubelet          Node newest-cni-741831 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-741831 event: Registered Node newest-cni-741831 in Controller
	
	
	==> dmesg <==
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	
	
	==> etcd [dd0e3d8eec101af0cdf8be9c93b2425a97577f086a02a3133b60fa3686e3c82a] <==
	{"level":"warn","ts":"2025-10-18T15:07:01.718368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.730685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.738416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.746751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.757899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.765577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.776417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.787524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.798126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.809494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.824546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.836016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.845020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.855602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.863254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.870315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.877692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.887198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.895967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.906591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.916496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.935841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.945436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:01.952884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:02.032475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58920","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:07:10 up  2:49,  0 user,  load average: 5.61, 3.45, 2.19
	Linux newest-cni-741831 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bd9c45aa764304c606046593f542a1af165aa1d2c8384b462aa7dcb38f337591] <==
	I1018 15:07:04.112475       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 15:07:04.130196       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1018 15:07:04.131411       1 main.go:148] setting mtu 1500 for CNI 
	I1018 15:07:04.131456       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 15:07:04.131485       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T15:07:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 15:07:04.340651       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 15:07:04.340687       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 15:07:04.340705       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 15:07:04.340880       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 15:07:04.740814       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 15:07:04.742380       1 metrics.go:72] Registering metrics
	I1018 15:07:04.742787       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [3aaf72a8fab306d03fd39b588902cacfd12938e06467694af5c1db7254c80b0d] <==
	I1018 15:07:02.673144       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 15:07:02.677408       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 15:07:02.673189       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 15:07:02.678051       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 15:07:02.678120       1 aggregator.go:171] initial CRD sync complete...
	I1018 15:07:02.678149       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 15:07:02.678173       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 15:07:02.678195       1 cache.go:39] Caches are synced for autoregister controller
	I1018 15:07:02.690366       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 15:07:02.728328       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:07:02.742062       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 15:07:02.756805       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 15:07:02.756855       1 policy_source.go:240] refreshing policies
	I1018 15:07:02.850052       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 15:07:03.079412       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 15:07:03.124125       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 15:07:03.146654       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:07:03.154291       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:07:03.169445       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 15:07:03.238898       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.142.159"}
	I1018 15:07:03.254399       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.113.99"}
	I1018 15:07:03.574597       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:07:06.464210       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 15:07:06.514239       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 15:07:06.564732       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ad1e0d015fdf1368ded1825253a96c3951279196aa718df2645c2024b26f3fc1] <==
	I1018 15:07:06.037996       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 15:07:06.043598       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 15:07:06.060990       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 15:07:06.061013       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 15:07:06.061075       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 15:07:06.061901       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 15:07:06.062100       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 15:07:06.063280       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 15:07:06.063313       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 15:07:06.065642       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 15:07:06.065669       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 15:07:06.067968       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 15:07:06.067987       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 15:07:06.070278       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 15:07:06.073582       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 15:07:06.074688       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 15:07:06.075745       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 15:07:06.079161       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 15:07:06.082488       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 15:07:06.082585       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 15:07:06.085965       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 15:07:06.089172       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 15:07:06.089181       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 15:07:06.093406       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 15:07:06.093597       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [ad5bdf93a6fd51d0e0a0d2091f1b59559b68981cabe5ac45c5a8502f33c102ad] <==
	I1018 15:07:03.940705       1 server_linux.go:53] "Using iptables proxy"
	I1018 15:07:04.002471       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 15:07:04.103644       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 15:07:04.103691       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1018 15:07:04.103786       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 15:07:04.133268       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 15:07:04.133379       1 server_linux.go:132] "Using iptables Proxier"
	I1018 15:07:04.140857       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 15:07:04.141452       1 server.go:527] "Version info" version="v1.34.1"
	I1018 15:07:04.143116       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:07:04.148512       1 config.go:200] "Starting service config controller"
	I1018 15:07:04.148987       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 15:07:04.149037       1 config.go:309] "Starting node config controller"
	I1018 15:07:04.149499       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 15:07:04.149539       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 15:07:04.149162       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 15:07:04.149689       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 15:07:04.149143       1 config.go:106] "Starting endpoint slice config controller"
	I1018 15:07:04.149756       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 15:07:04.249214       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 15:07:04.250792       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 15:07:04.250816       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [474f32f68077ed4928d09f0824bece6ea546498605afd71ed210110e677a4f30] <==
	I1018 15:07:00.834244       1 serving.go:386] Generated self-signed cert in-memory
	W1018 15:07:02.647789       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 15:07:02.647836       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 15:07:02.647849       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 15:07:02.647866       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 15:07:02.698149       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 15:07:02.698184       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:07:02.701807       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:07:02.701895       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:07:02.703185       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 15:07:02.703573       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 15:07:02.802853       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 15:07:01 newest-cni-741831 kubelet[652]: E1018 15:07:01.794191     652 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-741831\" not found" node="newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: I1018 15:07:02.821317     652 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: I1018 15:07:02.832448     652 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: I1018 15:07:02.835463     652 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: I1018 15:07:02.835559     652 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: I1018 15:07:02.838077     652 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: E1018 15:07:02.844813     652 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-741831\" already exists" pod="kube-system/kube-controller-manager-newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: I1018 15:07:02.844855     652 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: E1018 15:07:02.864024     652 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-741831\" already exists" pod="kube-system/kube-scheduler-newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: I1018 15:07:02.864071     652 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: E1018 15:07:02.873992     652 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-741831\" already exists" pod="kube-system/etcd-newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: I1018 15:07:02.874039     652 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: E1018 15:07:02.881009     652 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-741831\" already exists" pod="kube-system/kube-apiserver-newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: I1018 15:07:02.984286     652 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-741831"
	Oct 18 15:07:02 newest-cni-741831 kubelet[652]: E1018 15:07:02.993935     652 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-741831\" already exists" pod="kube-system/etcd-newest-cni-741831"
	Oct 18 15:07:03 newest-cni-741831 kubelet[652]: I1018 15:07:03.510635     652 apiserver.go:52] "Watching apiserver"
	Oct 18 15:07:03 newest-cni-741831 kubelet[652]: I1018 15:07:03.519357     652 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 15:07:03 newest-cni-741831 kubelet[652]: I1018 15:07:03.531446     652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/338db4d1-7623-42cc-ac47-40e8f34baf31-xtables-lock\") pod \"kindnet-pj5dl\" (UID: \"338db4d1-7623-42cc-ac47-40e8f34baf31\") " pod="kube-system/kindnet-pj5dl"
	Oct 18 15:07:03 newest-cni-741831 kubelet[652]: I1018 15:07:03.531532     652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e790462a-4f99-4636-aa73-b8cf26812e75-lib-modules\") pod \"kube-proxy-cgl2t\" (UID: \"e790462a-4f99-4636-aa73-b8cf26812e75\") " pod="kube-system/kube-proxy-cgl2t"
	Oct 18 15:07:03 newest-cni-741831 kubelet[652]: I1018 15:07:03.531560     652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e790462a-4f99-4636-aa73-b8cf26812e75-xtables-lock\") pod \"kube-proxy-cgl2t\" (UID: \"e790462a-4f99-4636-aa73-b8cf26812e75\") " pod="kube-system/kube-proxy-cgl2t"
	Oct 18 15:07:03 newest-cni-741831 kubelet[652]: I1018 15:07:03.531620     652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/338db4d1-7623-42cc-ac47-40e8f34baf31-cni-cfg\") pod \"kindnet-pj5dl\" (UID: \"338db4d1-7623-42cc-ac47-40e8f34baf31\") " pod="kube-system/kindnet-pj5dl"
	Oct 18 15:07:03 newest-cni-741831 kubelet[652]: I1018 15:07:03.531674     652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/338db4d1-7623-42cc-ac47-40e8f34baf31-lib-modules\") pod \"kindnet-pj5dl\" (UID: \"338db4d1-7623-42cc-ac47-40e8f34baf31\") " pod="kube-system/kindnet-pj5dl"
	Oct 18 15:07:05 newest-cni-741831 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 15:07:05 newest-cni-741831 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 15:07:05 newest-cni-741831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-741831 -n newest-cni-741831
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-741831 -n newest-cni-741831: exit status 2 (416.348797ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-741831 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-dksbs storage-provisioner dashboard-metrics-scraper-6ffb444bf9-g8lc4 kubernetes-dashboard-855c9754f9-dtpqf
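
The non-running list above comes from the field selector status.phase!=Running on the kubectl invocation, which matches every pod, in any namespace, whose phase is anything but Running. The equivalent query through client-go, as a sketch assuming the default kubeconfig location:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same query the harness runs: all namespaces, phase not Running.
		pods, err := cs.CoreV1().Pods("").List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}
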
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-741831 describe pod coredns-66bc5c9577-dksbs storage-provisioner dashboard-metrics-scraper-6ffb444bf9-g8lc4 kubernetes-dashboard-855c9754f9-dtpqf
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-741831 describe pod coredns-66bc5c9577-dksbs storage-provisioner dashboard-metrics-scraper-6ffb444bf9-g8lc4 kubernetes-dashboard-855c9754f9-dtpqf: exit status 1 (88.756612ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-dksbs" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-g8lc4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-dtpqf" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-741831 describe pod coredns-66bc5c9577-dksbs storage-provisioner dashboard-metrics-scraper-6ffb444bf9-g8lc4 kubernetes-dashboard-855c9754f9-dtpqf: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.26s)

x
+
TestStartStop/group/embed-certs/serial/Pause (5.98s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-775590 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-775590 --alsologtostderr -v=1: exit status 80 (1.780662727s)

-- stdout --
	* Pausing node embed-certs-775590 ... 
	
	

-- /stdout --
** stderr ** 
	I1018 15:07:39.428530  375247 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:07:39.428840  375247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:07:39.428853  375247 out.go:374] Setting ErrFile to fd 2...
	I1018 15:07:39.428859  375247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:07:39.429173  375247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:07:39.429504  375247 out.go:368] Setting JSON to false
	I1018 15:07:39.429560  375247 mustload.go:65] Loading cluster: embed-certs-775590
	I1018 15:07:39.429977  375247 config.go:182] Loaded profile config "embed-certs-775590": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:39.430370  375247 cli_runner.go:164] Run: docker container inspect embed-certs-775590 --format={{.State.Status}}
	I1018 15:07:39.447594  375247 host.go:66] Checking if "embed-certs-775590" exists ...
	I1018 15:07:39.447898  375247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:07:39.510999  375247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-18 15:07:39.499202498 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:07:39.511614  375247 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-775590 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 15:07:39.513390  375247 out.go:179] * Pausing node embed-certs-775590 ... 
	I1018 15:07:39.514726  375247 host.go:66] Checking if "embed-certs-775590" exists ...
	I1018 15:07:39.515042  375247 ssh_runner.go:195] Run: systemctl --version
	I1018 15:07:39.515088  375247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-775590
	I1018 15:07:39.532177  375247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/embed-certs-775590/id_rsa Username:docker}
	I1018 15:07:39.629894  375247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:07:39.642793  375247 pause.go:52] kubelet running: true
	I1018 15:07:39.642876  375247 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:07:39.806942  375247 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:07:39.807025  375247 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:07:39.879819  375247 cri.go:89] found id: "9c35dfe066e80a5d3e0a701c2875c46b723714dbbc466e10be5dd5abc8352ecd"
	I1018 15:07:39.879848  375247 cri.go:89] found id: "e9ed17ebe9d6e41129b3293acffeecd329c3a79689e63102b6194a572f14b893"
	I1018 15:07:39.879853  375247 cri.go:89] found id: "9c9aaeaf481f15d9001d08c681045b2b41d6acb97974d97e2be7e59590898211"
	I1018 15:07:39.879857  375247 cri.go:89] found id: "a503efb2ea9381b9c5fa4f5b26e57f3c807643c204fab83d6d48c48330820b57"
	I1018 15:07:39.879861  375247 cri.go:89] found id: "1f11860acba6b353b37043c9600e22e539776e34b5ceb6d65aa1f9742fa2a461"
	I1018 15:07:39.879866  375247 cri.go:89] found id: "8dbbbc5ba968b1ba56a06c344a32c3c030795f38bce0c95c907aa5896a4bb7f0"
	I1018 15:07:39.879870  375247 cri.go:89] found id: "7dac5e4ff28c655ac1e75121254546efea7aeb21f3f1842322ce82ba42dafce6"
	I1018 15:07:39.879874  375247 cri.go:89] found id: "391f2be1a0cb010a611fea801cf28a9d37af079421a87d50d1a13033b93f5316"
	I1018 15:07:39.879899  375247 cri.go:89] found id: "65178e05fb2051f87794f11a491ebb47135644c26089b48edd847c231777d3ce"
	I1018 15:07:39.879907  375247 cri.go:89] found id: "cfbdaedc4f8219ee6d0c2d1a4682d21b8f3ebc0449f3966109dd5720229923a2"
	I1018 15:07:39.879921  375247 cri.go:89] found id: "7832e0abf4afc353da085c8c8070f3929d57ca1ce8ed56737bd8d3f1433ad26f"
	I1018 15:07:39.879925  375247 cri.go:89] found id: ""
	I1018 15:07:39.879988  375247 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:07:39.898841  375247 retry.go:31] will retry after 360.555419ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:07:39Z" level=error msg="open /run/runc: no such file or directory"
	I1018 15:07:40.260589  375247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:07:40.275216  375247 pause.go:52] kubelet running: false
	I1018 15:07:40.275292  375247 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:07:40.439172  375247 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:07:40.439258  375247 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:07:40.510763  375247 cri.go:89] found id: "9c35dfe066e80a5d3e0a701c2875c46b723714dbbc466e10be5dd5abc8352ecd"
	I1018 15:07:40.510789  375247 cri.go:89] found id: "e9ed17ebe9d6e41129b3293acffeecd329c3a79689e63102b6194a572f14b893"
	I1018 15:07:40.510795  375247 cri.go:89] found id: "9c9aaeaf481f15d9001d08c681045b2b41d6acb97974d97e2be7e59590898211"
	I1018 15:07:40.510801  375247 cri.go:89] found id: "a503efb2ea9381b9c5fa4f5b26e57f3c807643c204fab83d6d48c48330820b57"
	I1018 15:07:40.510805  375247 cri.go:89] found id: "1f11860acba6b353b37043c9600e22e539776e34b5ceb6d65aa1f9742fa2a461"
	I1018 15:07:40.510810  375247 cri.go:89] found id: "8dbbbc5ba968b1ba56a06c344a32c3c030795f38bce0c95c907aa5896a4bb7f0"
	I1018 15:07:40.510814  375247 cri.go:89] found id: "7dac5e4ff28c655ac1e75121254546efea7aeb21f3f1842322ce82ba42dafce6"
	I1018 15:07:40.510819  375247 cri.go:89] found id: "391f2be1a0cb010a611fea801cf28a9d37af079421a87d50d1a13033b93f5316"
	I1018 15:07:40.510824  375247 cri.go:89] found id: "65178e05fb2051f87794f11a491ebb47135644c26089b48edd847c231777d3ce"
	I1018 15:07:40.510839  375247 cri.go:89] found id: "cfbdaedc4f8219ee6d0c2d1a4682d21b8f3ebc0449f3966109dd5720229923a2"
	I1018 15:07:40.510843  375247 cri.go:89] found id: "7832e0abf4afc353da085c8c8070f3929d57ca1ce8ed56737bd8d3f1433ad26f"
	I1018 15:07:40.510847  375247 cri.go:89] found id: ""
	I1018 15:07:40.510886  375247 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:07:40.523366  375247 retry.go:31] will retry after 359.383373ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:07:40Z" level=error msg="open /run/runc: no such file or directory"
	I1018 15:07:40.882992  375247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:07:40.900221  375247 pause.go:52] kubelet running: false
	I1018 15:07:40.900283  375247 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:07:41.060029  375247 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:07:41.060137  375247 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:07:41.134421  375247 cri.go:89] found id: "9c35dfe066e80a5d3e0a701c2875c46b723714dbbc466e10be5dd5abc8352ecd"
	I1018 15:07:41.134446  375247 cri.go:89] found id: "e9ed17ebe9d6e41129b3293acffeecd329c3a79689e63102b6194a572f14b893"
	I1018 15:07:41.134452  375247 cri.go:89] found id: "9c9aaeaf481f15d9001d08c681045b2b41d6acb97974d97e2be7e59590898211"
	I1018 15:07:41.134455  375247 cri.go:89] found id: "a503efb2ea9381b9c5fa4f5b26e57f3c807643c204fab83d6d48c48330820b57"
	I1018 15:07:41.134458  375247 cri.go:89] found id: "1f11860acba6b353b37043c9600e22e539776e34b5ceb6d65aa1f9742fa2a461"
	I1018 15:07:41.134461  375247 cri.go:89] found id: "8dbbbc5ba968b1ba56a06c344a32c3c030795f38bce0c95c907aa5896a4bb7f0"
	I1018 15:07:41.134464  375247 cri.go:89] found id: "7dac5e4ff28c655ac1e75121254546efea7aeb21f3f1842322ce82ba42dafce6"
	I1018 15:07:41.134467  375247 cri.go:89] found id: "391f2be1a0cb010a611fea801cf28a9d37af079421a87d50d1a13033b93f5316"
	I1018 15:07:41.134469  375247 cri.go:89] found id: "65178e05fb2051f87794f11a491ebb47135644c26089b48edd847c231777d3ce"
	I1018 15:07:41.134474  375247 cri.go:89] found id: "cfbdaedc4f8219ee6d0c2d1a4682d21b8f3ebc0449f3966109dd5720229923a2"
	I1018 15:07:41.134477  375247 cri.go:89] found id: "7832e0abf4afc353da085c8c8070f3929d57ca1ce8ed56737bd8d3f1433ad26f"
	I1018 15:07:41.134480  375247 cri.go:89] found id: ""
	I1018 15:07:41.134528  375247 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:07:41.148570  375247 out.go:203] 
	W1018 15:07:41.150045  375247 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:07:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:07:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 15:07:41.150065  375247 out.go:285] * 
	* 
	W1018 15:07:41.156108  375247 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 15:07:41.157582  375247 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-775590 --alsologtostderr -v=1 failed: exit status 80
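The stderr above shows why pause exits 80: after disabling the kubelet, minikube enumerates CRI containers per namespace with crictl, then asks the low-level runtime for its own view with `runc list -f json`; on this crio node /run/runc does not exist, so the runc probe fails on every retry and pause aborts with GUEST_PAUSE. A minimal sketch of the two probes, assuming it is run on the node itself with crictl and runc on PATH (illustrative, not minikube's actual pause code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Probe 1: what the CRI (crio) reports for one namespace, as in the log.
		crictl := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system")
		if out, err := crictl.Output(); err != nil {
			fmt.Println("crictl probe failed:", err)
		} else {
			fmt.Printf("crictl container IDs:\n%s", out)
		}

		// Probe 2: what runc itself reports. On this node it exits with
		// status 1 because /run/runc is missing (see the retries above).
		if err := exec.Command("sudo", "runc", "list", "-f", "json").Run(); err != nil {
			fmt.Println("runc probe failed:", err)
		}
	}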
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-775590
helpers_test.go:243: (dbg) docker inspect embed-certs-775590:

-- stdout --
	[
	    {
	        "Id": "fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136",
	        "Created": "2025-10-18T15:05:37.66682901Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 358710,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T15:06:41.69658099Z",
	            "FinishedAt": "2025-10-18T15:06:40.688530406Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136/hosts",
	        "LogPath": "/var/lib/docker/containers/fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136/fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136-json.log",
	        "Name": "/embed-certs-775590",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-775590:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-775590",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136",
	                "LowerDir": "/var/lib/docker/overlay2/0d871763812daaa41f53c8f03bf2ffa741ccbb0d8567e7bd1a04c83675b7f50c-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d871763812daaa41f53c8f03bf2ffa741ccbb0d8567e7bd1a04c83675b7f50c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d871763812daaa41f53c8f03bf2ffa741ccbb0d8567e7bd1a04c83675b7f50c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d871763812daaa41f53c8f03bf2ffa741ccbb0d8567e7bd1a04c83675b7f50c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-775590",
	                "Source": "/var/lib/docker/volumes/embed-certs-775590/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-775590",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-775590",
	                "name.minikube.sigs.k8s.io": "embed-certs-775590",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4649ebc5780875666188f1bbd4e2909c6e96b2e008a578b63c3fa62a388f8a5b",
	            "SandboxKey": "/var/run/docker/netns/4649ebc57808",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-775590": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:26:4c:bc:8a:43",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4b571e6f85a52c5072615169054e56aacc55a5a837ed83f6fbbd0772adfae9a2",
	                    "EndpointID": "0120fdae3fefcd80e952c09cf9319088ecc404c8cba6a4526a533ab423cf2917",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-775590",
	                        "fe1c521b2804"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
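One detail worth noting in the inspect output: every node port is published on 127.0.0.1 with a dynamically assigned host port, and the pause log resolves the SSH endpoint with the inspect template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} (33088 in this run). A small sketch of the same lookup, assuming the docker CLI is available (the container name is this run's profile):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort resolves the published host port for a container port using
	// the same Go template the pause log runs for 22/tcp.
	func hostPort(container, port string) (string, error) {
		tmpl := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		p, err := hostPort("embed-certs-775590", "22/tcp")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh published on 127.0.0.1:" + p) // 33088 in this run
	}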
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-775590 -n embed-certs-775590
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-775590 -n embed-certs-775590: exit status 2 (353.385177ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-775590 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-775590 logs -n 25: (1.359929399s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-775590 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ stop    │ -p embed-certs-775590 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ image   │ no-preload-165275 image list --format=json                                                                                                                                                                                                    │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ pause   │ -p no-preload-165275 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-489104 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-775590 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ delete  │ -p no-preload-165275                                                                                                                                                                                                                          │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p embed-certs-775590 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:07 UTC │
	│ stop    │ -p default-k8s-diff-port-489104 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ delete  │ -p no-preload-165275                                                                                                                                                                                                                          │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p auto-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:07 UTC │
	│ addons  │ enable metrics-server -p newest-cni-741831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ stop    │ -p newest-cni-741831 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-741831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p newest-cni-741831 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:07 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-489104 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p default-k8s-diff-port-489104 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ image   │ newest-cni-741831 image list --format=json                                                                                                                                                                                                    │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ pause   │ -p newest-cni-741831 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ delete  │ -p newest-cni-741831                                                                                                                                                                                                                          │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ delete  │ -p newest-cni-741831                                                                                                                                                                                                                          │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ start   │ -p kindnet-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-034446               │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ ssh     │ -p auto-034446 pgrep -a kubelet                                                                                                                                                                                                               │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ image   │ embed-certs-775590 image list --format=json                                                                                                                                                                                                   │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ pause   │ -p embed-certs-775590 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:07:13
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:07:13.891433  371660 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:07:13.891707  371660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:07:13.891719  371660 out.go:374] Setting ErrFile to fd 2...
	I1018 15:07:13.891723  371660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:07:13.891958  371660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:07:13.892459  371660 out.go:368] Setting JSON to false
	I1018 15:07:13.893828  371660 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10185,"bootTime":1760789849,"procs":380,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:07:13.893940  371660 start.go:141] virtualization: kvm guest
	I1018 15:07:13.895810  371660 out.go:179] * [kindnet-034446] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:07:13.897049  371660 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:07:13.897079  371660 notify.go:220] Checking for updates...
	I1018 15:07:13.899256  371660 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:07:13.900403  371660 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:07:13.901420  371660 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 15:07:13.902716  371660 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:07:13.903989  371660 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:07:13.905618  371660 config.go:182] Loaded profile config "auto-034446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:13.905758  371660 config.go:182] Loaded profile config "default-k8s-diff-port-489104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:13.905855  371660 config.go:182] Loaded profile config "embed-certs-775590": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:13.905988  371660 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:07:13.933614  371660 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:07:13.933799  371660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:07:13.995548  371660 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 15:07:13.984879123 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:07:13.995654  371660 docker.go:318] overlay module found
	I1018 15:07:13.997457  371660 out.go:179] * Using the docker driver based on user configuration
	I1018 15:07:13.998521  371660 start.go:305] selected driver: docker
	I1018 15:07:13.998536  371660 start.go:925] validating driver "docker" against <nil>
	I1018 15:07:13.998548  371660 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:07:13.999196  371660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:07:14.063388  371660 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 15:07:14.053098038 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:07:14.063670  371660 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 15:07:14.064011  371660 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:07:14.065819  371660 out.go:179] * Using Docker driver with root privileges
	I1018 15:07:14.067001  371660 cni.go:84] Creating CNI manager for "kindnet"
	I1018 15:07:14.067020  371660 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 15:07:14.067084  371660 start.go:349] cluster config:
	{Name:kindnet-034446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-034446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:07:14.068311  371660 out.go:179] * Starting "kindnet-034446" primary control-plane node in "kindnet-034446" cluster
	I1018 15:07:14.069358  371660 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:07:14.070432  371660 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:07:14.071382  371660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:07:14.071425  371660 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 15:07:14.071440  371660 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 15:07:14.071449  371660 cache.go:58] Caching tarball of preloaded images
	I1018 15:07:14.071582  371660 preload.go:233] Found /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 15:07:14.071601  371660 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 15:07:14.071778  371660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/config.json ...
	I1018 15:07:14.071809  371660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/config.json: {Name:mk2c4feb128cd0dd212b0cdd437b032d8e343a62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:14.093407  371660 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:07:14.093430  371660 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 15:07:14.093450  371660 cache.go:232] Successfully downloaded all kic artifacts
	I1018 15:07:14.093482  371660 start.go:360] acquireMachinesLock for kindnet-034446: {Name:mkd12f55bf6b0715c4444b4f1e88494697872916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:07:14.093583  371660 start.go:364] duration metric: took 82.424µs to acquireMachinesLock for "kindnet-034446"
	I1018 15:07:14.093607  371660 start.go:93] Provisioning new machine with config: &{Name:kindnet-034446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-034446 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:07:14.093682  371660 start.go:125] createHost starting for "" (driver="docker")
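Before createHost, the start path serializes on a machines lock (acquireMachinesLock above, Delay:500ms Timeout:10m0s), so concurrent profiles on the same agent cannot provision simultaneously. A rough sketch of that acquire-with-retry shape, using a bare O_EXCL lock file rather than minikube's actual lock package (illustrative only; the path is hypothetical):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquireLock emulates the Delay/Timeout shape seen in the log with a
	// plain O_CREATE|O_EXCL lock file; minikube's real lock package differs.
	func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; provisioning would start here")
	}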
	I1018 15:07:13.006331  359679 addons.go:514] duration metric: took 567.412314ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 15:07:13.277546  359679 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-034446" context rescaled to 1 replicas
	I1018 15:07:11.056123  366690 addons.go:514] duration metric: took 2.506352669s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 15:07:11.535001  366690 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1018 15:07:11.542852  366690 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 15:07:11.542898  366690 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 15:07:12.035624  366690 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1018 15:07:12.039821  366690 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1018 15:07:12.040980  366690 api_server.go:141] control plane version: v1.34.1
	I1018 15:07:12.041008  366690 api_server.go:131] duration metric: took 1.006160505s to wait for apiserver health ...
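
The healthz probe above starts at 500 while the rbac/bootstrap-roles post-start hook is still pending, then flips to 200 about a second later. A minimal sketch of the same poll-until-healthy pattern in Go (the URL, timeout, and retry cadence are illustrative values taken from this log, not minikube's actual implementation; TLS verification is skipped only because the bootstrap apiserver serves a cert the host does not trust yet):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
// or the deadline passes, mirroring the api_server.go wait loop in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver's cert is self-signed during bootstrap, so the probe
		// does not verify it (assumption for this sketch).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is up
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry spacing seen above
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.103.2:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
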
	I1018 15:07:12.041019  366690 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:07:12.044443  366690 system_pods.go:59] 8 kube-system pods found
	I1018 15:07:12.044474  366690 system_pods.go:61] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:07:12.044482  366690 system_pods.go:61] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 15:07:12.044490  366690 system_pods.go:61] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:07:12.044497  366690 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 15:07:12.044503  366690 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 15:07:12.044508  366690 system_pods.go:61] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:07:12.044514  366690 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 15:07:12.044519  366690 system_pods.go:61] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Running
	I1018 15:07:12.044527  366690 system_pods.go:74] duration metric: took 3.502402ms to wait for pod list to return data ...
	I1018 15:07:12.044537  366690 default_sa.go:34] waiting for default service account to be created ...
	I1018 15:07:12.046824  366690 default_sa.go:45] found service account: "default"
	I1018 15:07:12.046841  366690 default_sa.go:55] duration metric: took 2.299045ms for default service account to be created ...
	I1018 15:07:12.046848  366690 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 15:07:12.049209  366690 system_pods.go:86] 8 kube-system pods found
	I1018 15:07:12.049233  366690 system_pods.go:89] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:07:12.049240  366690 system_pods.go:89] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 15:07:12.049246  366690 system_pods.go:89] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:07:12.049255  366690 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 15:07:12.049263  366690 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 15:07:12.049270  366690 system_pods.go:89] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:07:12.049282  366690 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 15:07:12.049291  366690 system_pods.go:89] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Running
	I1018 15:07:12.049298  366690 system_pods.go:126] duration metric: took 2.444723ms to wait for k8s-apps to be running ...
	I1018 15:07:12.049307  366690 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 15:07:12.049351  366690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:07:12.063514  366690 system_svc.go:56] duration metric: took 14.195315ms WaitForService to wait for kubelet
	I1018 15:07:12.063550  366690 kubeadm.go:586] duration metric: took 3.513827059s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:07:12.063574  366690 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:07:12.066510  366690 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 15:07:12.066533  366690 node_conditions.go:123] node cpu capacity is 8
	I1018 15:07:12.066545  366690 node_conditions.go:105] duration metric: took 2.966469ms to run NodePressure ...
	I1018 15:07:12.066558  366690 start.go:241] waiting for startup goroutines ...
	I1018 15:07:12.066568  366690 start.go:246] waiting for cluster config update ...
	I1018 15:07:12.066581  366690 start.go:255] writing updated cluster config ...
	I1018 15:07:12.066844  366690 ssh_runner.go:195] Run: rm -f paused
	I1018 15:07:12.071208  366690 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:07:12.074708  366690 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dtjgd" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 15:07:14.080392  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:12.620760  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	W1018 15:07:15.118252  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	I1018 15:07:14.096137  371660 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 15:07:14.096350  371660 start.go:159] libmachine.API.Create for "kindnet-034446" (driver="docker")
	I1018 15:07:14.096381  371660 client.go:168] LocalClient.Create starting
	I1018 15:07:14.096445  371660 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem
	I1018 15:07:14.096478  371660 main.go:141] libmachine: Decoding PEM data...
	I1018 15:07:14.096494  371660 main.go:141] libmachine: Parsing certificate...
	I1018 15:07:14.096560  371660 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem
	I1018 15:07:14.096582  371660 main.go:141] libmachine: Decoding PEM data...
	I1018 15:07:14.096593  371660 main.go:141] libmachine: Parsing certificate...
	I1018 15:07:14.096952  371660 cli_runner.go:164] Run: docker network inspect kindnet-034446 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 15:07:14.115492  371660 cli_runner.go:211] docker network inspect kindnet-034446 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 15:07:14.115582  371660 network_create.go:284] running [docker network inspect kindnet-034446] to gather additional debugging logs...
	I1018 15:07:14.115608  371660 cli_runner.go:164] Run: docker network inspect kindnet-034446
	W1018 15:07:14.134826  371660 cli_runner.go:211] docker network inspect kindnet-034446 returned with exit code 1
	I1018 15:07:14.134874  371660 network_create.go:287] error running [docker network inspect kindnet-034446]: docker network inspect kindnet-034446: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-034446 not found
	I1018 15:07:14.134902  371660 network_create.go:289] output of [docker network inspect kindnet-034446]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-034446 not found
	
	** /stderr **
	I1018 15:07:14.135053  371660 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:07:14.153595  371660 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-67ded9675d49 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:eb:89:76:0f:a6} reservation:<nil>}
	I1018 15:07:14.154271  371660 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4b365c92bc46 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:db:b6:83:36:75} reservation:<nil>}
	I1018 15:07:14.154902  371660 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ab6063c7cdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:eb:32:cc:ab:b4} reservation:<nil>}
	I1018 15:07:14.155565  371660 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4b571e6f85a5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:35:91:99:08:5b} reservation:<nil>}
	I1018 15:07:14.156084  371660 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-047ecbec470e IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:d2:7c:b1:87:9d:5b} reservation:<nil>}
	I1018 15:07:14.156891  371660 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020794d0}
	I1018 15:07:14.156957  371660 network_create.go:124] attempt to create docker network kindnet-034446 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1018 15:07:14.157023  371660 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-034446 kindnet-034446
	I1018 15:07:14.232289  371660 network_create.go:108] docker network kindnet-034446 192.168.94.0/24 created
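
The subnet scan above walks private 192.168.x.0/24 candidates in steps of 9 in the third octet (49, 58, 67, 76, 85, 94) and takes the first one with no existing bridge interface. A rough sketch of that selection loop, with a hypothetical isTaken map standing in for minikube's host-interface inspection:

package main

import "fmt"

// firstFreeSubnet mimics the network.go scan in the log: candidate /24s start
// at 192.168.49.0 and advance by 9 in the third octet until one is free.
func firstFreeSubnet(taken map[string]bool) (string, bool) {
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] { // hypothetical check; minikube inspects host bridges
			return subnet, true
		}
	}
	return "", false
}

func main() {
	// Subnets already claimed by other profiles in this run, per the log.
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	if s, ok := firstFreeSubnet(taken); ok {
		fmt.Println("using free private subnet", s) // prints 192.168.94.0/24
	}
}
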
	I1018 15:07:14.232318  371660 kic.go:121] calculated static IP "192.168.94.2" for the "kindnet-034446" container
	I1018 15:07:14.232391  371660 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 15:07:14.255372  371660 cli_runner.go:164] Run: docker volume create kindnet-034446 --label name.minikube.sigs.k8s.io=kindnet-034446 --label created_by.minikube.sigs.k8s.io=true
	I1018 15:07:14.276062  371660 oci.go:103] Successfully created a docker volume kindnet-034446
	I1018 15:07:14.276135  371660 cli_runner.go:164] Run: docker run --rm --name kindnet-034446-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-034446 --entrypoint /usr/bin/test -v kindnet-034446:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 15:07:14.701440  371660 oci.go:107] Successfully prepared a docker volume kindnet-034446
	I1018 15:07:14.701495  371660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:07:14.701522  371660 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 15:07:14.701594  371660 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-034446:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1018 15:07:14.776034  359679 node_ready.go:57] node "auto-034446" has "Ready":"False" status (will retry)
	W1018 15:07:17.275694  359679 node_ready.go:57] node "auto-034446" has "Ready":"False" status (will retry)
	W1018 15:07:19.276554  359679 node_ready.go:57] node "auto-034446" has "Ready":"False" status (will retry)
	W1018 15:07:16.081089  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:18.141776  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:17.617450  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	W1018 15:07:19.625758  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	I1018 15:07:20.047407  371660 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-034446:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.345747651s)
	I1018 15:07:20.047446  371660 kic.go:203] duration metric: took 5.345919503s to extract preloaded images to volume ...
	W1018 15:07:20.047581  371660 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 15:07:20.047641  371660 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 15:07:20.047698  371660 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 15:07:20.117198  371660 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-034446 --name kindnet-034446 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-034446 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-034446 --network kindnet-034446 --ip 192.168.94.2 --volume kindnet-034446:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 15:07:20.873259  371660 cli_runner.go:164] Run: docker container inspect kindnet-034446 --format={{.State.Running}}
	I1018 15:07:20.894016  371660 cli_runner.go:164] Run: docker container inspect kindnet-034446 --format={{.State.Status}}
	I1018 15:07:20.915580  371660 cli_runner.go:164] Run: docker exec kindnet-034446 stat /var/lib/dpkg/alternatives/iptables
	I1018 15:07:20.972559  371660 oci.go:144] the created container "kindnet-034446" has a running status.
	I1018 15:07:20.972596  371660 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/kindnet-034446/id_rsa...
	I1018 15:07:21.205424  371660 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-89690/.minikube/machines/kindnet-034446/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 15:07:21.246907  371660 cli_runner.go:164] Run: docker container inspect kindnet-034446 --format={{.State.Status}}
	I1018 15:07:21.268702  371660 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 15:07:21.268729  371660 kic_runner.go:114] Args: [docker exec --privileged kindnet-034446 chown docker:docker /home/docker/.ssh/authorized_keys]
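
Key provisioning above comes down to: generate an RSA keypair under the profile's machines directory, push the public half into /home/docker/.ssh/authorized_keys inside the container, and chown it to the docker user. A self-contained sketch of the keypair half, using golang.org/x/crypto/ssh for the authorized_keys encoding (output paths are placeholders, not minikube's):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate an RSA key, like the id_rsa created for the kic node above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private key, PEM-encoded (the machines/<profile>/id_rsa side).
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	// Public key in authorized_keys format (the side copied into the container).
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote id_rsa and id_rsa.pub")
}
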
	I1018 15:07:21.316100  371660 cli_runner.go:164] Run: docker container inspect kindnet-034446 --format={{.State.Status}}
	I1018 15:07:21.338525  371660 machine.go:93] provisionDockerMachine start ...
	I1018 15:07:21.338626  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:21.359771  371660 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:21.360147  371660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 15:07:21.360168  371660 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:07:21.498869  371660 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-034446
	
	I1018 15:07:21.498905  371660 ubuntu.go:182] provisioning hostname "kindnet-034446"
	I1018 15:07:21.499001  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:21.516667  371660 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:21.516967  371660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 15:07:21.516987  371660 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-034446 && echo "kindnet-034446" | sudo tee /etc/hostname
	I1018 15:07:21.665958  371660 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-034446
	
	I1018 15:07:21.666077  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:21.687729  371660 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:21.688029  371660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 15:07:21.688053  371660 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-034446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-034446/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-034446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:07:21.830278  371660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 15:07:21.830314  371660 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 15:07:21.830340  371660 ubuntu.go:190] setting up certificates
	I1018 15:07:21.830360  371660 provision.go:84] configureAuth start
	I1018 15:07:21.830459  371660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-034446
	I1018 15:07:21.850279  371660 provision.go:143] copyHostCerts
	I1018 15:07:21.850349  371660 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem, removing ...
	I1018 15:07:21.850358  371660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:07:21.850410  371660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 15:07:21.850489  371660 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem, removing ...
	I1018 15:07:21.850497  371660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:07:21.850521  371660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 15:07:21.850576  371660 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem, removing ...
	I1018 15:07:21.850583  371660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:07:21.850605  371660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 15:07:21.850659  371660 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.kindnet-034446 san=[127.0.0.1 192.168.94.2 kindnet-034446 localhost minikube]
	I1018 15:07:22.039444  371660 provision.go:177] copyRemoteCerts
	I1018 15:07:22.039621  371660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:07:22.039684  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:22.057394  371660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/kindnet-034446/id_rsa Username:docker}
	I1018 15:07:22.154772  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:07:22.174888  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1018 15:07:22.192136  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 15:07:22.209516  371660 provision.go:87] duration metric: took 379.142168ms to configureAuth
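
configureAuth above generates a server certificate whose SANs cover every name the node can be reached by (127.0.0.1, 192.168.94.2, kindnet-034446, localhost, minikube), signed by the profile CA. A compressed sketch of SAN-bearing certificate creation with crypto/x509; it self-signs for brevity, whereas minikube passes its CA cert and key as the parent:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-034446"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list logged above.
		DNSNames:    []string{"kindnet-034446", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}
	// Self-signed here (template doubles as parent); minikube signs with its CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("server cert: %d DER bytes\n", len(der))
}
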
	I1018 15:07:22.209543  371660 ubuntu.go:206] setting minikube options for container-runtime
	I1018 15:07:22.209717  371660 config.go:182] Loaded profile config "kindnet-034446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:22.209837  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:22.227416  371660 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:22.227623  371660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 15:07:22.227639  371660 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:07:22.475966  371660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 15:07:22.475995  371660 machine.go:96] duration metric: took 1.137446715s to provisionDockerMachine
	I1018 15:07:22.476005  371660 client.go:171] duration metric: took 8.37961583s to LocalClient.Create
	I1018 15:07:22.476024  371660 start.go:167] duration metric: took 8.379677213s to libmachine.API.Create "kindnet-034446"
	I1018 15:07:22.476031  371660 start.go:293] postStartSetup for "kindnet-034446" (driver="docker")
	I1018 15:07:22.476040  371660 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:07:22.476109  371660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:07:22.476149  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:22.493244  371660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/kindnet-034446/id_rsa Username:docker}
	I1018 15:07:22.591112  371660 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:07:22.594514  371660 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 15:07:22.594540  371660 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 15:07:22.594552  371660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/addons for local assets ...
	I1018 15:07:22.594606  371660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/files for local assets ...
	I1018 15:07:22.594676  371660 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem -> 931872.pem in /etc/ssl/certs
	I1018 15:07:22.594785  371660 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:07:22.602370  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:07:22.623276  371660 start.go:296] duration metric: took 147.22765ms for postStartSetup
	I1018 15:07:22.623653  371660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-034446
	I1018 15:07:22.641709  371660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/config.json ...
	I1018 15:07:22.642064  371660 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 15:07:22.642114  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:22.661554  371660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/kindnet-034446/id_rsa Username:docker}
	I1018 15:07:22.755222  371660 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 15:07:22.759746  371660 start.go:128] duration metric: took 8.666049457s to createHost
	I1018 15:07:22.759771  371660 start.go:83] releasing machines lock for "kindnet-034446", held for 8.666176064s
	I1018 15:07:22.759838  371660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-034446
	I1018 15:07:22.777202  371660 ssh_runner.go:195] Run: cat /version.json
	I1018 15:07:22.777305  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:22.777337  371660 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:07:22.777402  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:22.798377  371660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/kindnet-034446/id_rsa Username:docker}
	I1018 15:07:22.799823  371660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/kindnet-034446/id_rsa Username:docker}
	I1018 15:07:22.902467  371660 ssh_runner.go:195] Run: systemctl --version
	I1018 15:07:22.959807  371660 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:07:22.995063  371660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:07:22.999786  371660 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:07:22.999854  371660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:07:23.025679  371660 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 15:07:23.025708  371660 start.go:495] detecting cgroup driver to use...
	I1018 15:07:23.025743  371660 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 15:07:23.025798  371660 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:07:23.041511  371660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:07:23.054209  371660 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:07:23.054288  371660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:07:23.070759  371660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:07:23.090926  371660 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:07:23.175598  371660 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:07:23.263720  371660 docker.go:234] disabling docker service ...
	I1018 15:07:23.263787  371660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:07:23.282658  371660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:07:23.295540  371660 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:07:23.381560  371660 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:07:23.465313  371660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 15:07:23.478568  371660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:07:23.492560  371660 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 15:07:23.492632  371660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:23.502731  371660 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 15:07:23.502799  371660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:23.511076  371660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:23.519684  371660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:23.528239  371660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:07:23.536295  371660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:23.546252  371660 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:23.559726  371660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:23.568173  371660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:07:23.575309  371660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 15:07:23.583469  371660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:07:23.666297  371660 ssh_runner.go:195] Run: sudo systemctl restart crio
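
The sed pipeline above edits the cri-o drop-in in place before the restart: it pins the pause image, switches the cgroup manager to systemd, sets conmon_cgroup, and opens unprivileged low ports via default_sysctls. Reconstructed from those individual edits, the resulting /etc/crio/crio.conf.d/02-crio.conf would look roughly like this (section placement follows cri-o's documented layout; treat it as an approximation of the file on disk, not a capture of it):

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
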
	I1018 15:07:23.771950  371660 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:07:23.772024  371660 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:07:23.776411  371660 start.go:563] Will wait 60s for crictl version
	I1018 15:07:23.776467  371660 ssh_runner.go:195] Run: which crictl
	I1018 15:07:23.780881  371660 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 15:07:23.813333  371660 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 15:07:23.813424  371660 ssh_runner.go:195] Run: crio --version
	I1018 15:07:23.847150  371660 ssh_runner.go:195] Run: crio --version
	I1018 15:07:23.877843  371660 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 15:07:23.879060  371660 cli_runner.go:164] Run: docker network inspect kindnet-034446 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 15:07:21.776334  359679 node_ready.go:57] node "auto-034446" has "Ready":"False" status (will retry)
	I1018 15:07:24.275871  359679 node_ready.go:49] node "auto-034446" is "Ready"
	I1018 15:07:24.275898  359679 node_ready.go:38] duration metric: took 11.503815451s for node "auto-034446" to be "Ready" ...
	I1018 15:07:24.275928  359679 api_server.go:52] waiting for apiserver process to appear ...
	I1018 15:07:24.275981  359679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:07:24.289006  359679 api_server.go:72] duration metric: took 11.850113502s to wait for apiserver process to appear ...
	I1018 15:07:24.289032  359679 api_server.go:88] waiting for apiserver healthz status ...
	I1018 15:07:24.289055  359679 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	W1018 15:07:20.581710  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:23.080059  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:22.116980  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	W1018 15:07:24.121383  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	I1018 15:07:26.117417  358344 pod_ready.go:94] pod "coredns-66bc5c9577-4b6bm" is "Ready"
	I1018 15:07:26.117447  358344 pod_ready.go:86] duration metric: took 31.505818539s for pod "coredns-66bc5c9577-4b6bm" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.119875  358344 pod_ready.go:83] waiting for pod "etcd-embed-certs-775590" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.124114  358344 pod_ready.go:94] pod "etcd-embed-certs-775590" is "Ready"
	I1018 15:07:26.124141  358344 pod_ready.go:86] duration metric: took 4.23626ms for pod "etcd-embed-certs-775590" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.126260  358344 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-775590" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.130034  358344 pod_ready.go:94] pod "kube-apiserver-embed-certs-775590" is "Ready"
	I1018 15:07:26.130053  358344 pod_ready.go:86] duration metric: took 3.774362ms for pod "kube-apiserver-embed-certs-775590" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.131946  358344 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-775590" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.315677  358344 pod_ready.go:94] pod "kube-controller-manager-embed-certs-775590" is "Ready"
	I1018 15:07:26.315709  358344 pod_ready.go:86] duration metric: took 183.743174ms for pod "kube-controller-manager-embed-certs-775590" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:24.294020  359679 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 15:07:24.295090  359679 api_server.go:141] control plane version: v1.34.1
	I1018 15:07:24.295112  359679 api_server.go:131] duration metric: took 6.072568ms to wait for apiserver health ...
	I1018 15:07:24.295120  359679 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:07:24.298983  359679 system_pods.go:59] 8 kube-system pods found
	I1018 15:07:24.299020  359679 system_pods.go:61] "coredns-66bc5c9577-j7gm7" [d175b3ed-f479-4f1d-bb1e-b91468314f7b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:07:24.299030  359679 system_pods.go:61] "etcd-auto-034446" [2a93a913-3768-4fc7-942f-3e8917104a32] Running
	I1018 15:07:24.299044  359679 system_pods.go:61] "kindnet-jhq9x" [b65fbd5f-1385-4bc2-b1ab-2903e28cecfb] Running
	I1018 15:07:24.299050  359679 system_pods.go:61] "kube-apiserver-auto-034446" [d2eccfbc-3d64-4b50-a763-d587a5b75732] Running
	I1018 15:07:24.299059  359679 system_pods.go:61] "kube-controller-manager-auto-034446" [050c4bf7-fb90-4b0d-a123-45ec8ff89907] Running
	I1018 15:07:24.299069  359679 system_pods.go:61] "kube-proxy-9xrg6" [947f8c47-d0ad-4d57-afa1-fd655c273e1c] Running
	I1018 15:07:24.299074  359679 system_pods.go:61] "kube-scheduler-auto-034446" [6e1853af-73e5-4d14-8cc9-5c9c52c4baa1] Running
	I1018 15:07:24.299090  359679 system_pods.go:61] "storage-provisioner" [4c96c4f7-24df-4d4a-95fc-4a7afcf80430] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:07:24.299110  359679 system_pods.go:74] duration metric: took 3.975007ms to wait for pod list to return data ...
	I1018 15:07:24.299124  359679 default_sa.go:34] waiting for default service account to be created ...
	I1018 15:07:24.301445  359679 default_sa.go:45] found service account: "default"
	I1018 15:07:24.301467  359679 default_sa.go:55] duration metric: took 2.336234ms for default service account to be created ...
	I1018 15:07:24.301476  359679 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 15:07:24.304133  359679 system_pods.go:86] 8 kube-system pods found
	I1018 15:07:24.304164  359679 system_pods.go:89] "coredns-66bc5c9577-j7gm7" [d175b3ed-f479-4f1d-bb1e-b91468314f7b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:07:24.304172  359679 system_pods.go:89] "etcd-auto-034446" [2a93a913-3768-4fc7-942f-3e8917104a32] Running
	I1018 15:07:24.304180  359679 system_pods.go:89] "kindnet-jhq9x" [b65fbd5f-1385-4bc2-b1ab-2903e28cecfb] Running
	I1018 15:07:24.304188  359679 system_pods.go:89] "kube-apiserver-auto-034446" [d2eccfbc-3d64-4b50-a763-d587a5b75732] Running
	I1018 15:07:24.304193  359679 system_pods.go:89] "kube-controller-manager-auto-034446" [050c4bf7-fb90-4b0d-a123-45ec8ff89907] Running
	I1018 15:07:24.304202  359679 system_pods.go:89] "kube-proxy-9xrg6" [947f8c47-d0ad-4d57-afa1-fd655c273e1c] Running
	I1018 15:07:24.304207  359679 system_pods.go:89] "kube-scheduler-auto-034446" [6e1853af-73e5-4d14-8cc9-5c9c52c4baa1] Running
	I1018 15:07:24.304216  359679 system_pods.go:89] "storage-provisioner" [4c96c4f7-24df-4d4a-95fc-4a7afcf80430] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:07:24.304240  359679 retry.go:31] will retry after 281.74581ms: missing components: kube-dns
	I1018 15:07:24.590201  359679 system_pods.go:86] 8 kube-system pods found
	I1018 15:07:24.590235  359679 system_pods.go:89] "coredns-66bc5c9577-j7gm7" [d175b3ed-f479-4f1d-bb1e-b91468314f7b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:07:24.590241  359679 system_pods.go:89] "etcd-auto-034446" [2a93a913-3768-4fc7-942f-3e8917104a32] Running
	I1018 15:07:24.590247  359679 system_pods.go:89] "kindnet-jhq9x" [b65fbd5f-1385-4bc2-b1ab-2903e28cecfb] Running
	I1018 15:07:24.590250  359679 system_pods.go:89] "kube-apiserver-auto-034446" [d2eccfbc-3d64-4b50-a763-d587a5b75732] Running
	I1018 15:07:24.590253  359679 system_pods.go:89] "kube-controller-manager-auto-034446" [050c4bf7-fb90-4b0d-a123-45ec8ff89907] Running
	I1018 15:07:24.590257  359679 system_pods.go:89] "kube-proxy-9xrg6" [947f8c47-d0ad-4d57-afa1-fd655c273e1c] Running
	I1018 15:07:24.590262  359679 system_pods.go:89] "kube-scheduler-auto-034446" [6e1853af-73e5-4d14-8cc9-5c9c52c4baa1] Running
	I1018 15:07:24.590269  359679 system_pods.go:89] "storage-provisioner" [4c96c4f7-24df-4d4a-95fc-4a7afcf80430] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:07:24.590289  359679 retry.go:31] will retry after 251.940778ms: missing components: kube-dns
	I1018 15:07:24.846109  359679 system_pods.go:86] 8 kube-system pods found
	I1018 15:07:24.846146  359679 system_pods.go:89] "coredns-66bc5c9577-j7gm7" [d175b3ed-f479-4f1d-bb1e-b91468314f7b] Running
	I1018 15:07:24.846154  359679 system_pods.go:89] "etcd-auto-034446" [2a93a913-3768-4fc7-942f-3e8917104a32] Running
	I1018 15:07:24.846159  359679 system_pods.go:89] "kindnet-jhq9x" [b65fbd5f-1385-4bc2-b1ab-2903e28cecfb] Running
	I1018 15:07:24.846164  359679 system_pods.go:89] "kube-apiserver-auto-034446" [d2eccfbc-3d64-4b50-a763-d587a5b75732] Running
	I1018 15:07:24.846169  359679 system_pods.go:89] "kube-controller-manager-auto-034446" [050c4bf7-fb90-4b0d-a123-45ec8ff89907] Running
	I1018 15:07:24.846175  359679 system_pods.go:89] "kube-proxy-9xrg6" [947f8c47-d0ad-4d57-afa1-fd655c273e1c] Running
	I1018 15:07:24.846179  359679 system_pods.go:89] "kube-scheduler-auto-034446" [6e1853af-73e5-4d14-8cc9-5c9c52c4baa1] Running
	I1018 15:07:24.846184  359679 system_pods.go:89] "storage-provisioner" [4c96c4f7-24df-4d4a-95fc-4a7afcf80430] Running
	I1018 15:07:24.846194  359679 system_pods.go:126] duration metric: took 544.711692ms to wait for k8s-apps to be running ...
	I1018 15:07:24.846204  359679 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 15:07:24.846255  359679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:07:24.859825  359679 system_svc.go:56] duration metric: took 13.610777ms WaitForService to wait for kubelet
	I1018 15:07:24.859860  359679 kubeadm.go:586] duration metric: took 12.4209713s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:07:24.859882  359679 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:07:24.863244  359679 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 15:07:24.863278  359679 node_conditions.go:123] node cpu capacity is 8
	I1018 15:07:24.863296  359679 node_conditions.go:105] duration metric: took 3.408412ms to run NodePressure ...
	I1018 15:07:24.863311  359679 start.go:241] waiting for startup goroutines ...
	I1018 15:07:24.863334  359679 start.go:246] waiting for cluster config update ...
	I1018 15:07:24.863351  359679 start.go:255] writing updated cluster config ...
	I1018 15:07:24.863700  359679 ssh_runner.go:195] Run: rm -f paused
	I1018 15:07:24.867845  359679 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:07:24.871735  359679 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j7gm7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:24.876366  359679 pod_ready.go:94] pod "coredns-66bc5c9577-j7gm7" is "Ready"
	I1018 15:07:24.876384  359679 pod_ready.go:86] duration metric: took 4.629258ms for pod "coredns-66bc5c9577-j7gm7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:24.880651  359679 pod_ready.go:83] waiting for pod "etcd-auto-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:24.884526  359679 pod_ready.go:94] pod "etcd-auto-034446" is "Ready"
	I1018 15:07:24.884547  359679 pod_ready.go:86] duration metric: took 3.875476ms for pod "etcd-auto-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:24.886422  359679 pod_ready.go:83] waiting for pod "kube-apiserver-auto-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:24.890095  359679 pod_ready.go:94] pod "kube-apiserver-auto-034446" is "Ready"
	I1018 15:07:24.890115  359679 pod_ready.go:86] duration metric: took 3.67411ms for pod "kube-apiserver-auto-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:24.891811  359679 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:25.272808  359679 pod_ready.go:94] pod "kube-controller-manager-auto-034446" is "Ready"
	I1018 15:07:25.272838  359679 pod_ready.go:86] duration metric: took 381.009525ms for pod "kube-controller-manager-auto-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:25.472997  359679 pod_ready.go:83] waiting for pod "kube-proxy-9xrg6" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:25.872552  359679 pod_ready.go:94] pod "kube-proxy-9xrg6" is "Ready"
	I1018 15:07:25.872585  359679 pod_ready.go:86] duration metric: took 399.564185ms for pod "kube-proxy-9xrg6" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.073777  359679 pod_ready.go:83] waiting for pod "kube-scheduler-auto-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.472545  359679 pod_ready.go:94] pod "kube-scheduler-auto-034446" is "Ready"
	I1018 15:07:26.472572  359679 pod_ready.go:86] duration metric: took 398.765064ms for pod "kube-scheduler-auto-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.472589  359679 pod_ready.go:40] duration metric: took 1.604708988s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:07:26.518392  359679 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:07:26.520329  359679 out.go:179] * Done! kubectl is now configured to use "auto-034446" cluster and "default" namespace by default
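
The pod_ready.go "extra waiting" above checks each control-plane pod for the Ready condition, retrying until it flips or the pod disappears. An equivalent wait expressed with client-go (the kubeconfig path is a placeholder; PollUntilContextTimeout comes from k8s.io/apimachinery/pkg/util/wait):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Wait up to 4m for the pod to be Ready or be gone, like pod_ready.go above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-j7gm7", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // pod is gone, which also satisfies the wait
			}
			if err != nil {
				return false, nil // transient error: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true, nil // pod reported Ready
				}
			}
			return false, nil
		})
	fmt.Println("wait finished, err =", err)
}
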
	I1018 15:07:26.515976  358344 pod_ready.go:83] waiting for pod "kube-proxy-clcpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.915670  358344 pod_ready.go:94] pod "kube-proxy-clcpk" is "Ready"
	I1018 15:07:26.915710  358344 pod_ready.go:86] duration metric: took 399.704546ms for pod "kube-proxy-clcpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:27.115958  358344 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-775590" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:27.515863  358344 pod_ready.go:94] pod "kube-scheduler-embed-certs-775590" is "Ready"
	I1018 15:07:27.515893  358344 pod_ready.go:86] duration metric: took 399.905934ms for pod "kube-scheduler-embed-certs-775590" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:27.515908  358344 pod_ready.go:40] duration metric: took 32.976617507s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:07:27.588087  358344 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:07:27.589888  358344 out.go:179] * Done! kubectl is now configured to use "embed-certs-775590" cluster and "default" namespace by default
	I1018 15:07:23.896408  371660 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 15:07:23.900765  371660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:07:23.911675  371660 kubeadm.go:883] updating cluster {Name:kindnet-034446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-034446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 15:07:23.911846  371660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:07:23.911947  371660 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:07:23.945020  371660 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:07:23.945044  371660 crio.go:433] Images already preloaded, skipping extraction
	I1018 15:07:23.945093  371660 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:07:23.972204  371660 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:07:23.972230  371660 cache_images.go:85] Images are preloaded, skipping loading
	I1018 15:07:23.972241  371660 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1018 15:07:23.972360  371660 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-034446 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-034446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1018 15:07:23.972449  371660 ssh_runner.go:195] Run: crio config
	I1018 15:07:24.019780  371660 cni.go:84] Creating CNI manager for "kindnet"
	I1018 15:07:24.019810  371660 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 15:07:24.019831  371660 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-034446 NodeName:kindnet-034446 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 15:07:24.020003  371660 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-034446"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
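
The kubeadm.go:196 block above is the config minikube writes to /var/tmp/minikube/kubeadm.yaml.new (2210 bytes, per the scp line below) before invoking kubeadm init. A trimmed sketch of producing such a file with Go's text/template; the template here covers only a handful of the fields and is not minikube's actual template:

// kubeadmcfg_sketch.go: render a (heavily trimmed) kubeadm InitConfiguration.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
`

func main() {
	// Values taken from the log above.
	data := struct {
		NodeIP   string
		Port     int
		NodeName string
	}{NodeIP: "192.168.94.2", Port: 8443, NodeName: "kindnet-034446"}

	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}

The real file stitches InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration into one multi-document YAML, as printed above.
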
	I1018 15:07:24.020065  371660 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 15:07:24.028770  371660 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 15:07:24.028828  371660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 15:07:24.036928  371660 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1018 15:07:24.050614  371660 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 15:07:24.066491  371660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1018 15:07:24.079414  371660 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 15:07:24.083341  371660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
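
The grep/rewrite pair above (also used earlier for host.minikube.internal) keeps /etc/hosts idempotent: any stale line for the name is stripped before the fresh IP mapping is appended. A sketch of the same rewrite in Go; it stops at the temp copy, since replacing /etc/hosts itself needs the `sudo cp` from the log:

// hosts_sketch.go: drop any stale mapping for a name, append the fresh one.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const name = "control-plane.minikube.internal"
	const ip = "192.168.94.2" // values from the log above

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // like `grep -v $'\t<name>$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)

	// Write a temp copy; the log then installs it with `sudo cp`.
	tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid())
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("wrote", tmp, "- copy over /etc/hosts with elevated privileges")
}
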
	I1018 15:07:24.093399  371660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:07:24.194491  371660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:07:24.220282  371660 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446 for IP: 192.168.94.2
	I1018 15:07:24.220306  371660 certs.go:195] generating shared ca certs ...
	I1018 15:07:24.220327  371660 certs.go:227] acquiring lock for ca certs: {Name:mk6d3b4e44856f498157352ffe8bb89d2c7a3998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:24.220495  371660 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key
	I1018 15:07:24.220556  371660 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key
	I1018 15:07:24.220571  371660 certs.go:257] generating profile certs ...
	I1018 15:07:24.220649  371660 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/client.key
	I1018 15:07:24.220673  371660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/client.crt with IP's: []
	I1018 15:07:24.442965  371660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/client.crt ...
	I1018 15:07:24.442993  371660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/client.crt: {Name:mk4954a4e27ec62317870f3520bc2e13d3dc0d01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:24.443169  371660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/client.key ...
	I1018 15:07:24.443184  371660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/client.key: {Name:mk474a6b5c93b148014a0c6e5baf85dab6bd095e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:24.443262  371660 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.key.f33f2128
	I1018 15:07:24.443277  371660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.crt.f33f2128 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1018 15:07:24.954250  371660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.crt.f33f2128 ...
	I1018 15:07:24.954278  371660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.crt.f33f2128: {Name:mkf27c872cee6095c1af96c3010a2644d05d26cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:24.954438  371660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.key.f33f2128 ...
	I1018 15:07:24.954451  371660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.key.f33f2128: {Name:mkef370b8f324961cb64b8eb1d2e4ec9286db57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:24.954560  371660 certs.go:382] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.crt.f33f2128 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.crt
	I1018 15:07:24.954674  371660 certs.go:386] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.key.f33f2128 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.key
	I1018 15:07:24.954735  371660 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/proxy-client.key
	I1018 15:07:24.954750  371660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/proxy-client.crt with IP's: []
	I1018 15:07:25.324614  371660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/proxy-client.crt ...
	I1018 15:07:25.324644  371660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/proxy-client.crt: {Name:mk5f217dde31a4ced1bb6aaab44cb4470463885d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:25.324844  371660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/proxy-client.key ...
	I1018 15:07:25.324862  371660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/proxy-client.key: {Name:mk0f8f2fec5f9b0d4c608f512ef8d8048a5cf396 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
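
The certs.go/crypto.go lines above generate profile certificates signed by the shared minikubeCA, with the IP SANs listed in the log. A compact crypto/x509 sketch of the same shape; it creates a throwaway CA so it is self-contained (minikube reuses its existing CA key pair), and key sizes and subject fields are simplified assumptions:

// certgen_sketch.go: sign a serving cert with IP SANs using a local CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; minikube would load the existing minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the IP SANs from the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
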
	I1018 15:07:25.325133  371660 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem (1338 bytes)
	W1018 15:07:25.325177  371660 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187_empty.pem, impossibly tiny 0 bytes
	I1018 15:07:25.325194  371660 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 15:07:25.325220  371660 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem (1082 bytes)
	I1018 15:07:25.325248  371660 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem (1123 bytes)
	I1018 15:07:25.325274  371660 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem (1675 bytes)
	I1018 15:07:25.325332  371660 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:07:25.325897  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 15:07:25.345048  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 15:07:25.362660  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 15:07:25.381760  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 15:07:25.399926  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 15:07:25.417830  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 15:07:25.434880  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 15:07:25.452329  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 15:07:25.470795  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem --> /usr/share/ca-certificates/93187.pem (1338 bytes)
	I1018 15:07:25.491476  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /usr/share/ca-certificates/931872.pem (1708 bytes)
	I1018 15:07:25.509337  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 15:07:25.527647  371660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 15:07:25.540582  371660 ssh_runner.go:195] Run: openssl version
	I1018 15:07:25.546869  371660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/931872.pem && ln -fs /usr/share/ca-certificates/931872.pem /etc/ssl/certs/931872.pem"
	I1018 15:07:25.555897  371660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/931872.pem
	I1018 15:07:25.559654  371660 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 14:26 /usr/share/ca-certificates/931872.pem
	I1018 15:07:25.559724  371660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/931872.pem
	I1018 15:07:25.596726  371660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/931872.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 15:07:25.607022  371660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 15:07:25.616789  371660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:07:25.620802  371660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:07:25.620863  371660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:07:25.655008  371660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 15:07:25.664466  371660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93187.pem && ln -fs /usr/share/ca-certificates/93187.pem /etc/ssl/certs/93187.pem"
	I1018 15:07:25.675678  371660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93187.pem
	I1018 15:07:25.679956  371660 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 14:26 /usr/share/ca-certificates/93187.pem
	I1018 15:07:25.680023  371660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93187.pem
	I1018 15:07:25.726061  371660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93187.pem /etc/ssl/certs/51391683.0"
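
The openssl/ln sequence above installs each CA under /etc/ssl/certs as <subject-hash>.0, which is how OpenSSL-based clients locate trust anchors (the log hashes minikubeCA.pem to b5213941). A sketch that shells out to the same openssl invocation and creates the symlink; it assumes openssl is on PATH, uses the minikubeCA path from the log, and needs root to write into /etc/ssl/certs:

// cahash_sketch.go: link a CA cert into /etc/ssl/certs under its subject hash.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log

	// Same invocation the log runs: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic `ln -fs`: replace any stale link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
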
	I1018 15:07:25.735327  371660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 15:07:25.739544  371660 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 15:07:25.739601  371660 kubeadm.go:400] StartCluster: {Name:kindnet-034446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-034446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:07:25.739666  371660 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 15:07:25.739716  371660 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 15:07:25.768810  371660 cri.go:89] found id: ""
	I1018 15:07:25.768876  371660 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 15:07:25.777836  371660 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 15:07:25.786842  371660 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 15:07:25.786902  371660 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 15:07:25.795939  371660 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 15:07:25.795962  371660 kubeadm.go:157] found existing configuration files:
	
	I1018 15:07:25.796013  371660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 15:07:25.805341  371660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 15:07:25.805400  371660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 15:07:25.813858  371660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 15:07:25.822152  371660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 15:07:25.822218  371660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 15:07:25.830422  371660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 15:07:25.838612  371660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 15:07:25.838674  371660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 15:07:25.846478  371660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 15:07:25.854432  371660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 15:07:25.854478  371660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 15:07:25.862525  371660 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 15:07:25.921475  371660 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 15:07:25.986682  371660 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1018 15:07:25.081480  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:27.584594  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:30.080648  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:32.080780  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:34.081420  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	I1018 15:07:35.948611  371660 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 15:07:35.948684  371660 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 15:07:35.948816  371660 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 15:07:35.948905  371660 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 15:07:35.948990  371660 kubeadm.go:318] OS: Linux
	I1018 15:07:35.949080  371660 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 15:07:35.949145  371660 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 15:07:35.949217  371660 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 15:07:35.949285  371660 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 15:07:35.949352  371660 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 15:07:35.949433  371660 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 15:07:35.949514  371660 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 15:07:35.949597  371660 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 15:07:35.949701  371660 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 15:07:35.949841  371660 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 15:07:35.949989  371660 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 15:07:35.950095  371660 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 15:07:35.952335  371660 out.go:252]   - Generating certificates and keys ...
	I1018 15:07:35.952420  371660 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 15:07:35.952501  371660 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 15:07:35.952591  371660 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 15:07:35.952679  371660 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 15:07:35.952754  371660 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 15:07:35.952832  371660 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 15:07:35.952893  371660 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 15:07:35.953107  371660 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [kindnet-034446 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 15:07:35.953198  371660 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 15:07:35.953374  371660 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [kindnet-034446 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 15:07:35.953442  371660 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 15:07:35.953515  371660 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 15:07:35.953557  371660 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 15:07:35.953605  371660 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 15:07:35.953651  371660 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 15:07:35.953697  371660 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 15:07:35.953745  371660 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 15:07:35.953848  371660 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 15:07:35.953899  371660 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 15:07:35.954002  371660 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 15:07:35.954091  371660 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 15:07:35.955672  371660 out.go:252]   - Booting up control plane ...
	I1018 15:07:35.955841  371660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 15:07:35.955974  371660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 15:07:35.956060  371660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 15:07:35.956197  371660 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 15:07:35.956318  371660 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 15:07:35.956496  371660 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 15:07:35.956629  371660 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 15:07:35.956693  371660 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 15:07:35.956890  371660 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 15:07:35.957056  371660 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 15:07:35.957177  371660 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.836732ms
	I1018 15:07:35.957308  371660 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 15:07:35.957421  371660 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1018 15:07:35.957555  371660 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 15:07:35.957666  371660 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 15:07:35.957749  371660 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.848799048s
	I1018 15:07:35.957818  371660 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.452252864s
	I1018 15:07:35.957964  371660 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001395984s
	I1018 15:07:35.958112  371660 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 15:07:35.958254  371660 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 15:07:35.958329  371660 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 15:07:35.958578  371660 kubeadm.go:318] [mark-control-plane] Marking the node kindnet-034446 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 15:07:35.958662  371660 kubeadm.go:318] [bootstrap-token] Using token: zc9t65.l8nr7otv2eknn5q1
	I1018 15:07:35.960548  371660 out.go:252]   - Configuring RBAC rules ...
	I1018 15:07:35.960697  371660 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 15:07:35.960815  371660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 15:07:35.961074  371660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 15:07:35.961270  371660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 15:07:35.961413  371660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 15:07:35.961506  371660 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 15:07:35.961693  371660 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 15:07:35.961756  371660 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 15:07:35.961850  371660 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 15:07:35.961860  371660 kubeadm.go:318] 
	I1018 15:07:35.961969  371660 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 15:07:35.961996  371660 kubeadm.go:318] 
	I1018 15:07:35.962092  371660 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 15:07:35.962100  371660 kubeadm.go:318] 
	I1018 15:07:35.962123  371660 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 15:07:35.962179  371660 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 15:07:35.962224  371660 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 15:07:35.962229  371660 kubeadm.go:318] 
	I1018 15:07:35.962287  371660 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 15:07:35.962298  371660 kubeadm.go:318] 
	I1018 15:07:35.962335  371660 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 15:07:35.962342  371660 kubeadm.go:318] 
	I1018 15:07:35.962393  371660 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 15:07:35.962457  371660 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 15:07:35.962518  371660 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 15:07:35.962523  371660 kubeadm.go:318] 
	I1018 15:07:35.962615  371660 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 15:07:35.962701  371660 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 15:07:35.962708  371660 kubeadm.go:318] 
	I1018 15:07:35.962814  371660 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token zc9t65.l8nr7otv2eknn5q1 \
	I1018 15:07:35.962904  371660 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9845daa2461a34dd973607f8353b114051f1b03075ff9500a74fb21865e04329 \
	I1018 15:07:35.963009  371660 kubeadm.go:318] 	--control-plane 
	I1018 15:07:35.963024  371660 kubeadm.go:318] 
	I1018 15:07:35.963115  371660 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 15:07:35.963122  371660 kubeadm.go:318] 
	I1018 15:07:35.963289  371660 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token zc9t65.l8nr7otv2eknn5q1 \
	I1018 15:07:35.963407  371660 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9845daa2461a34dd973607f8353b114051f1b03075ff9500a74fb21865e04329 
	I1018 15:07:35.963422  371660 cni.go:84] Creating CNI manager for "kindnet"
	I1018 15:07:35.965234  371660 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 15:07:35.967496  371660 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 15:07:35.972601  371660 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 15:07:35.972626  371660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 15:07:35.986768  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 15:07:36.240361  371660 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 15:07:36.240476  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:36.240555  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-034446 minikube.k8s.io/updated_at=2025_10_18T15_07_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=kindnet-034446 minikube.k8s.io/primary=true
	I1018 15:07:36.252479  371660 ops.go:34] apiserver oom_adj: -16
	I1018 15:07:36.348724  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:36.848804  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:37.348781  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:37.849119  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:38.349251  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:38.849124  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1018 15:07:36.580100  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:38.580745  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	I1018 15:07:39.349128  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:39.849576  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:40.349582  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:40.849144  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:41.349514  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:41.420931  371660 kubeadm.go:1113] duration metric: took 5.180515097s to wait for elevateKubeSystemPrivileges
	I1018 15:07:41.420975  371660 kubeadm.go:402] duration metric: took 15.681377419s to StartCluster
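
The burst of `kubectl get sa default` runs above is minikube retrying at roughly 500ms intervals until the default service account exists, before binding cluster-admin to kube-system (the elevateKubeSystemPrivileges step whose duration is reported here). A sketch of that retry loop with os/exec and a deadline; the kubeconfig flag is the placeholder path from the log:

// sawait_sketch.go: re-run `kubectl get sa default` until it succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig") // path from the log
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		if time.Now().After(deadline) {
			panic("timed out waiting for default service account")
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
}
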
	I1018 15:07:41.420999  371660 settings.go:142] acquiring lock: {Name:mk9d9d72167811b3554435e137ed17137bf54a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:41.421127  371660 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:07:41.424048  371660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/kubeconfig: {Name:mkdc6fbc9be8e57447490b30dd5bb0086611876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:41.424391  371660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 15:07:41.424415  371660 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 15:07:41.424391  371660 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:07:41.424506  371660 addons.go:69] Setting storage-provisioner=true in profile "kindnet-034446"
	I1018 15:07:41.424524  371660 addons.go:238] Setting addon storage-provisioner=true in "kindnet-034446"
	I1018 15:07:41.424554  371660 addons.go:69] Setting default-storageclass=true in profile "kindnet-034446"
	I1018 15:07:41.424578  371660 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-034446"
	I1018 15:07:41.424655  371660 config.go:182] Loaded profile config "kindnet-034446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:41.424550  371660 host.go:66] Checking if "kindnet-034446" exists ...
	I1018 15:07:41.425091  371660 cli_runner.go:164] Run: docker container inspect kindnet-034446 --format={{.State.Status}}
	I1018 15:07:41.425896  371660 cli_runner.go:164] Run: docker container inspect kindnet-034446 --format={{.State.Status}}
	I1018 15:07:41.426756  371660 out.go:179] * Verifying Kubernetes components...
	I1018 15:07:41.428337  371660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:07:41.452034  371660 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Oct 18 15:07:14 embed-certs-775590 crio[564]: time="2025-10-18T15:07:14.972053204Z" level=info msg="Started container" PID=1734 containerID=a120a16be10f9e87dda620e5f0086ece9b7dbe4ad07f8baa796dc3721eae54cf description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g/dashboard-metrics-scraper id=a338faf4-d5e5-4ea9-a2b8-63db22eb27e2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=992f13ec69bdeaadbd96ba2b78cefaac27d1d925e6c8f2081787c3e75d7629d1
	Oct 18 15:07:15 embed-certs-775590 crio[564]: time="2025-10-18T15:07:15.035991433Z" level=info msg="Removing container: 1ece75908169b7644f2e638417387757434321c6c7d738a6f5d8344da10a9b97" id=9e9ee831-7fcf-4d58-bf15-199106b3c467 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:07:15 embed-certs-775590 crio[564]: time="2025-10-18T15:07:15.049230878Z" level=info msg="Removed container 1ece75908169b7644f2e638417387757434321c6c7d738a6f5d8344da10a9b97: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g/dashboard-metrics-scraper" id=9e9ee831-7fcf-4d58-bf15-199106b3c467 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.064210816Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=aceed0dc-3257-4ca4-b4be-a2c604da379f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.06523449Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bc66cc32-3781-47b7-9ad2-5a6fe6aad2cb name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.0664211Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=303d4c2a-3f5b-4f7f-97c7-54b8ef190ae6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.066702154Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.071396188Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.07163662Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/254ce3b8fb1b2df3db7cf4b57e42fae6ceb81e68f0474dca68d5be52cec0154e/merged/etc/passwd: no such file or directory"
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.071670395Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/254ce3b8fb1b2df3db7cf4b57e42fae6ceb81e68f0474dca68d5be52cec0154e/merged/etc/group: no such file or directory"
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.071986283Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.094511546Z" level=info msg="Created container 9c35dfe066e80a5d3e0a701c2875c46b723714dbbc466e10be5dd5abc8352ecd: kube-system/storage-provisioner/storage-provisioner" id=303d4c2a-3f5b-4f7f-97c7-54b8ef190ae6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.095218189Z" level=info msg="Starting container: 9c35dfe066e80a5d3e0a701c2875c46b723714dbbc466e10be5dd5abc8352ecd" id=4e88e21c-3409-49ac-b19c-a533d19d8f0e name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.097094435Z" level=info msg="Started container" PID=1748 containerID=9c35dfe066e80a5d3e0a701c2875c46b723714dbbc466e10be5dd5abc8352ecd description=kube-system/storage-provisioner/storage-provisioner id=4e88e21c-3409-49ac-b19c-a533d19d8f0e name=/runtime.v1.RuntimeService/StartContainer sandboxID=dfa52245e14ccecbf2275ba20021dcccbf42895e1508903eb6d83b95e2589857
	Oct 18 15:07:35 embed-certs-775590 crio[564]: time="2025-10-18T15:07:35.91187284Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ade260d1-0ad6-4043-af4f-87a9175a5ebd name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:35 embed-certs-775590 crio[564]: time="2025-10-18T15:07:35.912697447Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=18ad72f9-1a98-4372-995c-e64d1c1ffa5a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:35 embed-certs-775590 crio[564]: time="2025-10-18T15:07:35.913777851Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g/dashboard-metrics-scraper" id=404b8827-c3c3-45f2-91c2-529b006c61d9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:35 embed-certs-775590 crio[564]: time="2025-10-18T15:07:35.914113768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:35 embed-certs-775590 crio[564]: time="2025-10-18T15:07:35.920685313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:35 embed-certs-775590 crio[564]: time="2025-10-18T15:07:35.921394343Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:35 embed-certs-775590 crio[564]: time="2025-10-18T15:07:35.963053176Z" level=info msg="Created container cfbdaedc4f8219ee6d0c2d1a4682d21b8f3ebc0449f3966109dd5720229923a2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g/dashboard-metrics-scraper" id=404b8827-c3c3-45f2-91c2-529b006c61d9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:35 embed-certs-775590 crio[564]: time="2025-10-18T15:07:35.963750089Z" level=info msg="Starting container: cfbdaedc4f8219ee6d0c2d1a4682d21b8f3ebc0449f3966109dd5720229923a2" id=a2067306-f718-4d38-865e-1e6ebc3639da name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:07:35 embed-certs-775590 crio[564]: time="2025-10-18T15:07:35.965891409Z" level=info msg="Started container" PID=1784 containerID=cfbdaedc4f8219ee6d0c2d1a4682d21b8f3ebc0449f3966109dd5720229923a2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g/dashboard-metrics-scraper id=a2067306-f718-4d38-865e-1e6ebc3639da name=/runtime.v1.RuntimeService/StartContainer sandboxID=992f13ec69bdeaadbd96ba2b78cefaac27d1d925e6c8f2081787c3e75d7629d1
	Oct 18 15:07:36 embed-certs-775590 crio[564]: time="2025-10-18T15:07:36.099848858Z" level=info msg="Removing container: a120a16be10f9e87dda620e5f0086ece9b7dbe4ad07f8baa796dc3721eae54cf" id=6c3fb72e-ab5c-4680-93ba-5770a3c8f013 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:07:36 embed-certs-775590 crio[564]: time="2025-10-18T15:07:36.113845734Z" level=info msg="Removed container a120a16be10f9e87dda620e5f0086ece9b7dbe4ad07f8baa796dc3721eae54cf: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g/dashboard-metrics-scraper" id=6c3fb72e-ab5c-4680-93ba-5770a3c8f013 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	cfbdaedc4f821       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago       Exited              dashboard-metrics-scraper   3                   992f13ec69bde       dashboard-metrics-scraper-6ffb444bf9-txp8g   kubernetes-dashboard
	9c35dfe066e80       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   dfa52245e14cc       storage-provisioner                          kube-system
	7832e0abf4afc       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   36 seconds ago      Running             kubernetes-dashboard        0                   aec2b1803dba5       kubernetes-dashboard-855c9754f9-vfwtr        kubernetes-dashboard
	e9ed17ebe9d6e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   720ce375be34c       coredns-66bc5c9577-4b6bm                     kube-system
	0d44aee5b3b14       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   5dedf3eb6ed53       busybox                                      default
	9c9aaeaf481f1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   6b94e410cee70       kindnet-nkkwg                                kube-system
	a503efb2ea938       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   dfa52245e14cc       storage-provisioner                          kube-system
	1f11860acba6b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           48 seconds ago      Running             kube-proxy                  0                   e2d3ed84f46f2       kube-proxy-clcpk                             kube-system
	8dbbbc5ba968b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   dcdd3715e731a       etcd-embed-certs-775590                      kube-system
	7dac5e4ff28c6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   e7a7a988f1ea8       kube-controller-manager-embed-certs-775590   kube-system
	391f2be1a0cb0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           52 seconds ago      Running             kube-apiserver              0                   e7aed843e24dc       kube-apiserver-embed-certs-775590            kube-system
	65178e05fb205       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           52 seconds ago      Running             kube-scheduler              0                   d00e70cc02496       kube-scheduler-embed-certs-775590            kube-system
	
	
	==> coredns [e9ed17ebe9d6e41129b3293acffeecd329c3a79689e63102b6194a572f14b893] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37422 - 22580 "HINFO IN 5490066793616333859.1255217850527147958. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.084629907s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-775590
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-775590
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=embed-certs-775590
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_05_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:05:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-775590
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:07:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:07:23 +0000   Sat, 18 Oct 2025 15:05:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:07:23 +0000   Sat, 18 Oct 2025 15:05:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:07:23 +0000   Sat, 18 Oct 2025 15:05:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 15:07:23 +0000   Sat, 18 Oct 2025 15:06:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-775590
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                df1f36b9-fc29-426b-bde8-96e4a3ead557
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-4b6bm                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-embed-certs-775590                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-nkkwg                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-embed-certs-775590             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-embed-certs-775590    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-clcpk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-embed-certs-775590             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-txp8g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vfwtr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 47s                kube-proxy       
	  Normal  NodeHasSufficientMemory  109s               kubelet          Node embed-certs-775590 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s               kubelet          Node embed-certs-775590 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s               kubelet          Node embed-certs-775590 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s               node-controller  Node embed-certs-775590 event: Registered Node embed-certs-775590 in Controller
	  Normal  NodeReady                92s                kubelet          Node embed-certs-775590 status is now: NodeReady
	  Normal  Starting                 53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 53s)  kubelet          Node embed-certs-775590 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 53s)  kubelet          Node embed-certs-775590 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 53s)  kubelet          Node embed-certs-775590 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                node-controller  Node embed-certs-775590 event: Registered Node embed-certs-775590 in Controller
	
	
	==> dmesg <==
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	
	
	==> etcd [8dbbbc5ba968b1ba56a06c344a32c3c030795f38bce0c95c907aa5896a4bb7f0] <==
	{"level":"warn","ts":"2025-10-18T15:06:52.177072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.184002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.199633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.210040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.225671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.234310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.242074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.249046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.257148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.265273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.273560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.287182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.294972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.303825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.312874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.320994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.329153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.339758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.346573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.364106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.374086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.381297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.398265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.405535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.465752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60702","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:07:42 up  2:50,  0 user,  load average: 6.53, 3.82, 2.35
	Linux embed-certs-775590 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9c9aaeaf481f15d9001d08c681045b2b41d6acb97974d97e2be7e59590898211] <==
	I1018 15:06:54.393746       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 15:06:54.394073       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 15:06:54.394264       1 main.go:148] setting mtu 1500 for CNI 
	I1018 15:06:54.394291       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 15:06:54.394405       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T15:06:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 15:06:54.693653       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 15:06:54.693686       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 15:06:54.693701       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 15:06:54.693876       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 15:06:55.189279       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 15:06:55.189312       1 metrics.go:72] Registering metrics
	I1018 15:06:55.189368       1 controller.go:711] "Syncing nftables rules"
	I1018 15:07:04.694853       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 15:07:04.694945       1 main.go:301] handling current node
	I1018 15:07:14.698146       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 15:07:14.698184       1 main.go:301] handling current node
	I1018 15:07:24.693595       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 15:07:24.693637       1 main.go:301] handling current node
	I1018 15:07:34.701003       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 15:07:34.701039       1 main.go:301] handling current node
	
	
	==> kube-apiserver [391f2be1a0cb010a611fea801cf28a9d37af079421a87d50d1a13033b93f5316] <==
	I1018 15:06:52.998791       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 15:06:52.999029       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 15:06:53.000202       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 15:06:53.000698       1 aggregator.go:171] initial CRD sync complete...
	I1018 15:06:53.000963       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 15:06:53.000980       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 15:06:53.000989       1 cache.go:39] Caches are synced for autoregister controller
	I1018 15:06:53.002554       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 15:06:53.002811       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 15:06:53.011805       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 15:06:53.036935       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 15:06:53.046383       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 15:06:53.046440       1 policy_source.go:240] refreshing policies
	I1018 15:06:53.057500       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 15:06:53.314743       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 15:06:53.346365       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 15:06:53.370780       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:06:53.378967       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:06:53.385833       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 15:06:53.416611       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.102.78"}
	I1018 15:06:53.426803       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.179.170"}
	I1018 15:06:53.901246       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:06:56.715508       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 15:06:56.862836       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 15:06:56.963469       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7dac5e4ff28c655ac1e75121254546efea7aeb21f3f1842322ce82ba42dafce6] <==
	I1018 15:06:56.348867       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 15:06:56.350073       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 15:06:56.352375       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 15:06:56.354653       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 15:06:56.355789       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 15:06:56.358053       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 15:06:56.359228       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 15:06:56.359255       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 15:06:56.359304       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 15:06:56.359335       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 15:06:56.359430       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 15:06:56.359690       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 15:06:56.359759       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 15:06:56.359771       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 15:06:56.359816       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 15:06:56.360206       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 15:06:56.360219       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 15:06:56.362500       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 15:06:56.362549       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 15:06:56.364997       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 15:06:56.365016       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 15:06:56.366186       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 15:06:56.367368       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 15:06:56.375833       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 15:06:56.377177       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [1f11860acba6b353b37043c9600e22e539776e34b5ceb6d65aa1f9742fa2a461] <==
	I1018 15:06:54.319237       1 server_linux.go:53] "Using iptables proxy"
	I1018 15:06:54.388061       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 15:06:54.488449       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 15:06:54.488482       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 15:06:54.488584       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 15:06:54.508538       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 15:06:54.508597       1 server_linux.go:132] "Using iptables Proxier"
	I1018 15:06:54.515255       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 15:06:54.515724       1 server.go:527] "Version info" version="v1.34.1"
	I1018 15:06:54.515775       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:06:54.517343       1 config.go:106] "Starting endpoint slice config controller"
	I1018 15:06:54.517359       1 config.go:200] "Starting service config controller"
	I1018 15:06:54.517370       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 15:06:54.517379       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 15:06:54.517635       1 config.go:309] "Starting node config controller"
	I1018 15:06:54.517701       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 15:06:54.517715       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 15:06:54.517901       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 15:06:54.517977       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 15:06:54.617826       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 15:06:54.617847       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 15:06:54.618205       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [65178e05fb2051f87794f11a491ebb47135644c26089b48edd847c231777d3ce] <==
	I1018 15:06:50.879337       1 serving.go:386] Generated self-signed cert in-memory
	W1018 15:06:52.946297       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 15:06:52.946349       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 15:06:52.946364       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 15:06:52.946374       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 15:06:52.983834       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 15:06:52.984264       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:06:52.986896       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:06:52.986949       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:06:52.987304       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 15:06:52.987381       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 15:06:53.087677       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 15:06:57 embed-certs-775590 kubelet[721]: I1018 15:06:57.083511     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g5nr\" (UniqueName: \"kubernetes.io/projected/848bd25d-a835-42b9-b839-ed84777eb911-kube-api-access-5g5nr\") pod \"dashboard-metrics-scraper-6ffb444bf9-txp8g\" (UID: \"848bd25d-a835-42b9-b839-ed84777eb911\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g"
	Oct 18 15:07:00 embed-certs-775590 kubelet[721]: I1018 15:07:00.980199     721 scope.go:117] "RemoveContainer" containerID="097dbcf22388bf577426ae2e1cf215d02a018d3514599ede91ddcaec91f5c0cd"
	Oct 18 15:07:01 embed-certs-775590 kubelet[721]: I1018 15:07:01.985182     721 scope.go:117] "RemoveContainer" containerID="097dbcf22388bf577426ae2e1cf215d02a018d3514599ede91ddcaec91f5c0cd"
	Oct 18 15:07:01 embed-certs-775590 kubelet[721]: I1018 15:07:01.985338     721 scope.go:117] "RemoveContainer" containerID="1ece75908169b7644f2e638417387757434321c6c7d738a6f5d8344da10a9b97"
	Oct 18 15:07:01 embed-certs-775590 kubelet[721]: E1018 15:07:01.985535     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txp8g_kubernetes-dashboard(848bd25d-a835-42b9-b839-ed84777eb911)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g" podUID="848bd25d-a835-42b9-b839-ed84777eb911"
	Oct 18 15:07:02 embed-certs-775590 kubelet[721]: I1018 15:07:02.994812     721 scope.go:117] "RemoveContainer" containerID="1ece75908169b7644f2e638417387757434321c6c7d738a6f5d8344da10a9b97"
	Oct 18 15:07:02 embed-certs-775590 kubelet[721]: E1018 15:07:02.995074     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txp8g_kubernetes-dashboard(848bd25d-a835-42b9-b839-ed84777eb911)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g" podUID="848bd25d-a835-42b9-b839-ed84777eb911"
	Oct 18 15:07:04 embed-certs-775590 kubelet[721]: I1018 15:07:04.478855     721 scope.go:117] "RemoveContainer" containerID="1ece75908169b7644f2e638417387757434321c6c7d738a6f5d8344da10a9b97"
	Oct 18 15:07:04 embed-certs-775590 kubelet[721]: E1018 15:07:04.479763     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txp8g_kubernetes-dashboard(848bd25d-a835-42b9-b839-ed84777eb911)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g" podUID="848bd25d-a835-42b9-b839-ed84777eb911"
	Oct 18 15:07:06 embed-certs-775590 kubelet[721]: I1018 15:07:06.019039     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vfwtr" podStartSLOduration=2.006503487 podStartE2EDuration="10.019015563s" podCreationTimestamp="2025-10-18 15:06:56 +0000 UTC" firstStartedPulling="2025-10-18 15:06:57.278530281 +0000 UTC m=+7.471834583" lastFinishedPulling="2025-10-18 15:07:05.291042351 +0000 UTC m=+15.484346659" observedRunningTime="2025-10-18 15:07:06.018563912 +0000 UTC m=+16.211868224" watchObservedRunningTime="2025-10-18 15:07:06.019015563 +0000 UTC m=+16.212319887"
	Oct 18 15:07:14 embed-certs-775590 kubelet[721]: I1018 15:07:14.909123     721 scope.go:117] "RemoveContainer" containerID="1ece75908169b7644f2e638417387757434321c6c7d738a6f5d8344da10a9b97"
	Oct 18 15:07:15 embed-certs-775590 kubelet[721]: I1018 15:07:15.033992     721 scope.go:117] "RemoveContainer" containerID="1ece75908169b7644f2e638417387757434321c6c7d738a6f5d8344da10a9b97"
	Oct 18 15:07:15 embed-certs-775590 kubelet[721]: I1018 15:07:15.034433     721 scope.go:117] "RemoveContainer" containerID="a120a16be10f9e87dda620e5f0086ece9b7dbe4ad07f8baa796dc3721eae54cf"
	Oct 18 15:07:15 embed-certs-775590 kubelet[721]: E1018 15:07:15.034639     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txp8g_kubernetes-dashboard(848bd25d-a835-42b9-b839-ed84777eb911)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g" podUID="848bd25d-a835-42b9-b839-ed84777eb911"
	Oct 18 15:07:24 embed-certs-775590 kubelet[721]: I1018 15:07:24.479283     721 scope.go:117] "RemoveContainer" containerID="a120a16be10f9e87dda620e5f0086ece9b7dbe4ad07f8baa796dc3721eae54cf"
	Oct 18 15:07:24 embed-certs-775590 kubelet[721]: E1018 15:07:24.479497     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txp8g_kubernetes-dashboard(848bd25d-a835-42b9-b839-ed84777eb911)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g" podUID="848bd25d-a835-42b9-b839-ed84777eb911"
	Oct 18 15:07:25 embed-certs-775590 kubelet[721]: I1018 15:07:25.063693     721 scope.go:117] "RemoveContainer" containerID="a503efb2ea9381b9c5fa4f5b26e57f3c807643c204fab83d6d48c48330820b57"
	Oct 18 15:07:35 embed-certs-775590 kubelet[721]: I1018 15:07:35.911493     721 scope.go:117] "RemoveContainer" containerID="a120a16be10f9e87dda620e5f0086ece9b7dbe4ad07f8baa796dc3721eae54cf"
	Oct 18 15:07:36 embed-certs-775590 kubelet[721]: I1018 15:07:36.098331     721 scope.go:117] "RemoveContainer" containerID="a120a16be10f9e87dda620e5f0086ece9b7dbe4ad07f8baa796dc3721eae54cf"
	Oct 18 15:07:36 embed-certs-775590 kubelet[721]: I1018 15:07:36.098589     721 scope.go:117] "RemoveContainer" containerID="cfbdaedc4f8219ee6d0c2d1a4682d21b8f3ebc0449f3966109dd5720229923a2"
	Oct 18 15:07:36 embed-certs-775590 kubelet[721]: E1018 15:07:36.098802     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txp8g_kubernetes-dashboard(848bd25d-a835-42b9-b839-ed84777eb911)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g" podUID="848bd25d-a835-42b9-b839-ed84777eb911"
	Oct 18 15:07:39 embed-certs-775590 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 15:07:39 embed-certs-775590 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 15:07:39 embed-certs-775590 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 15:07:39 embed-certs-775590 systemd[1]: kubelet.service: Consumed 1.725s CPU time.
	
	
	==> kubernetes-dashboard [7832e0abf4afc353da085c8c8070f3929d57ca1ce8ed56737bd8d3f1433ad26f] <==
	2025/10/18 15:07:05 Starting overwatch
	2025/10/18 15:07:05 Using namespace: kubernetes-dashboard
	2025/10/18 15:07:05 Using in-cluster config to connect to apiserver
	2025/10/18 15:07:05 Using secret token for csrf signing
	2025/10/18 15:07:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 15:07:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 15:07:05 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 15:07:05 Generating JWE encryption key
	2025/10/18 15:07:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 15:07:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 15:07:05 Initializing JWE encryption key from synchronized object
	2025/10/18 15:07:05 Creating in-cluster Sidecar client
	2025/10/18 15:07:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 15:07:05 Serving insecurely on HTTP port: 9090
	2025/10/18 15:07:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [9c35dfe066e80a5d3e0a701c2875c46b723714dbbc466e10be5dd5abc8352ecd] <==
	I1018 15:07:25.110025       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 15:07:25.118812       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 15:07:25.118959       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 15:07:25.121116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:28.576778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:32.838370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:36.437759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:39.491382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:42.514603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:42.520572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 15:07:42.520796       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 15:07:42.521011       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-775590_7c83e531-f167-4419-be16-ec32ca059751!
	I1018 15:07:42.520970       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b555887a-6bab-4008-b93c-f9bed67d8ecd", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-775590_7c83e531-f167-4419-be16-ec32ca059751 became leader
	W1018 15:07:42.523526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:42.530331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 15:07:42.621682       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-775590_7c83e531-f167-4419-be16-ec32ca059751!
	
	
	==> storage-provisioner [a503efb2ea9381b9c5fa4f5b26e57f3c807643c204fab83d6d48c48330820b57] <==
	I1018 15:06:54.284496       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 15:07:24.286311       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
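Reading the two storage-provisioner blocks above together: the first instance (a503efb2ea93...) exited after failing to reach the apiserver service VIP at 10.96.0.1:443 (i/o timeout), and its replacement (9c35dfe066e8...) then re-acquired the kube-system/k8s.io-minikube-hostpath leader-election lease once the apiserver was reachable again. A hedged sketch for inspecting the current lease holder by hand — the context name is from this run, the Endpoints object name comes from the LeaderElection event logged above, and the annotation key is the conventional client-go resource-lock annotation, not something this report shows:

	kubectl --context embed-certs-775590 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# If present, the control-plane.alpha.kubernetes.io/leader annotation
	# names the provisioner instance currently holding the lease.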
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-775590 -n embed-certs-775590
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-775590 -n embed-certs-775590: exit status 2 (349.342091ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
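The "(may be ok)" above is the harness acknowledging that a non-zero `status` exit can be expected mid-test: after a pause, the host container stays Running while cluster components stop, and minikube surfaces that component state through its exit code. A minimal sketch for re-checking by hand, reusing the profile name from this run and combining the two Go templates the harness queries separately (the combined template and the `|| echo` wrapper are illustrative assumptions, not part of the harness):

	out/minikube-linux-amd64 status -p embed-certs-775590 \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}' \
	  || echo "status exit code: $?"   # non-zero corresponds to non-Running components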
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-775590 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-775590
helpers_test.go:243: (dbg) docker inspect embed-certs-775590:

-- stdout --
	[
	    {
	        "Id": "fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136",
	        "Created": "2025-10-18T15:05:37.66682901Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 358710,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T15:06:41.69658099Z",
	            "FinishedAt": "2025-10-18T15:06:40.688530406Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136/hosts",
	        "LogPath": "/var/lib/docker/containers/fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136/fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136-json.log",
	        "Name": "/embed-certs-775590",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-775590:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-775590",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fe1c521b280494d3973daf14de894edbc26737bc2d8faadb416622436aa56136",
	                "LowerDir": "/var/lib/docker/overlay2/0d871763812daaa41f53c8f03bf2ffa741ccbb0d8567e7bd1a04c83675b7f50c-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d871763812daaa41f53c8f03bf2ffa741ccbb0d8567e7bd1a04c83675b7f50c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d871763812daaa41f53c8f03bf2ffa741ccbb0d8567e7bd1a04c83675b7f50c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d871763812daaa41f53c8f03bf2ffa741ccbb0d8567e7bd1a04c83675b7f50c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-775590",
	                "Source": "/var/lib/docker/volumes/embed-certs-775590/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-775590",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-775590",
	                "name.minikube.sigs.k8s.io": "embed-certs-775590",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4649ebc5780875666188f1bbd4e2909c6e96b2e008a578b63c3fa62a388f8a5b",
	            "SandboxKey": "/var/run/docker/netns/4649ebc57808",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-775590": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:26:4c:bc:8a:43",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4b571e6f85a52c5072615169054e56aacc55a5a837ed83f6fbbd0772adfae9a2",
	                    "EndpointID": "0120fdae3fefcd80e952c09cf9319088ecc404c8cba6a4526a533ab423cf2917",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-775590",
	                        "fe1c521b2804"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
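The NetworkSettings.Ports map in the inspect output above is also handy for manual follow-up: each container port is bound to a loopback host port (the API server's 8443/tcp maps to 127.0.0.1:33091 in this run). A small sketch extracting that value with docker's Go-template support rather than scanning the full JSON (container name from this run):

	# Print the host port bound to the API server's 8443/tcp (33091 above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-775590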
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-775590 -n embed-certs-775590
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-775590 -n embed-certs-775590: exit status 2 (327.293168ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-775590 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-775590 logs -n 25: (1.218050627s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-775590 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ delete  │ -p no-preload-165275                                                                                                                                                                                                                          │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p embed-certs-775590 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:07 UTC │
	│ stop    │ -p default-k8s-diff-port-489104 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ delete  │ -p no-preload-165275                                                                                                                                                                                                                          │ no-preload-165275            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p auto-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:07 UTC │
	│ addons  │ enable metrics-server -p newest-cni-741831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ stop    │ -p newest-cni-741831 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-741831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p newest-cni-741831 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:07 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-489104 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │ 18 Oct 25 15:06 UTC │
	│ start   │ -p default-k8s-diff-port-489104 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:06 UTC │                     │
	│ image   │ newest-cni-741831 image list --format=json                                                                                                                                                                                                    │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ pause   │ -p newest-cni-741831 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ delete  │ -p newest-cni-741831                                                                                                                                                                                                                          │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ delete  │ -p newest-cni-741831                                                                                                                                                                                                                          │ newest-cni-741831            │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ start   │ -p kindnet-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-034446               │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ ssh     │ -p auto-034446 pgrep -a kubelet                                                                                                                                                                                                               │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ image   │ embed-certs-775590 image list --format=json                                                                                                                                                                                                   │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ pause   │ -p embed-certs-775590 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ ssh     │ -p auto-034446 sudo cat /etc/nsswitch.conf                                                                                                                                                                                                    │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo cat /etc/hosts                                                                                                                                                                                                            │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo cat /etc/resolv.conf                                                                                                                                                                                                      │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo crictl pods                                                                                                                                                                                                               │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo crictl ps --all                                                                                                                                                                                                           │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:07:13
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:07:13.891433  371660 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:07:13.891707  371660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:07:13.891719  371660 out.go:374] Setting ErrFile to fd 2...
	I1018 15:07:13.891723  371660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:07:13.891958  371660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:07:13.892459  371660 out.go:368] Setting JSON to false
	I1018 15:07:13.893828  371660 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10185,"bootTime":1760789849,"procs":380,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:07:13.893940  371660 start.go:141] virtualization: kvm guest
	I1018 15:07:13.895810  371660 out.go:179] * [kindnet-034446] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:07:13.897049  371660 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:07:13.897079  371660 notify.go:220] Checking for updates...
	I1018 15:07:13.899256  371660 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:07:13.900403  371660 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:07:13.901420  371660 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 15:07:13.902716  371660 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:07:13.903989  371660 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:07:13.905618  371660 config.go:182] Loaded profile config "auto-034446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:13.905758  371660 config.go:182] Loaded profile config "default-k8s-diff-port-489104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:13.905855  371660 config.go:182] Loaded profile config "embed-certs-775590": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:13.905988  371660 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:07:13.933614  371660 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:07:13.933799  371660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:07:13.995548  371660 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 15:07:13.984879123 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:07:13.995654  371660 docker.go:318] overlay module found
	I1018 15:07:13.997457  371660 out.go:179] * Using the docker driver based on user configuration
	I1018 15:07:13.998521  371660 start.go:305] selected driver: docker
	I1018 15:07:13.998536  371660 start.go:925] validating driver "docker" against <nil>
	I1018 15:07:13.998548  371660 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:07:13.999196  371660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:07:14.063388  371660 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 15:07:14.053098038 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:07:14.063670  371660 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 15:07:14.064011  371660 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:07:14.065819  371660 out.go:179] * Using Docker driver with root privileges
	I1018 15:07:14.067001  371660 cni.go:84] Creating CNI manager for "kindnet"
	I1018 15:07:14.067020  371660 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 15:07:14.067084  371660 start.go:349] cluster config:
	{Name:kindnet-034446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-034446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:07:14.068311  371660 out.go:179] * Starting "kindnet-034446" primary control-plane node in "kindnet-034446" cluster
	I1018 15:07:14.069358  371660 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:07:14.070432  371660 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:07:14.071382  371660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:07:14.071425  371660 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 15:07:14.071440  371660 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 15:07:14.071449  371660 cache.go:58] Caching tarball of preloaded images
	I1018 15:07:14.071582  371660 preload.go:233] Found /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 15:07:14.071601  371660 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 15:07:14.071778  371660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/config.json ...
	I1018 15:07:14.071809  371660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/config.json: {Name:mk2c4feb128cd0dd212b0cdd437b032d8e343a62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:14.093407  371660 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:07:14.093430  371660 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 15:07:14.093450  371660 cache.go:232] Successfully downloaded all kic artifacts
	I1018 15:07:14.093482  371660 start.go:360] acquireMachinesLock for kindnet-034446: {Name:mkd12f55bf6b0715c4444b4f1e88494697872916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:07:14.093583  371660 start.go:364] duration metric: took 82.424µs to acquireMachinesLock for "kindnet-034446"
	I1018 15:07:14.093607  371660 start.go:93] Provisioning new machine with config: &{Name:kindnet-034446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-034446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:07:14.093682  371660 start.go:125] createHost starting for "" (driver="docker")
	I1018 15:07:13.006331  359679 addons.go:514] duration metric: took 567.412314ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 15:07:13.277546  359679 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-034446" context rescaled to 1 replicas
	I1018 15:07:11.056123  366690 addons.go:514] duration metric: took 2.506352669s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 15:07:11.535001  366690 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1018 15:07:11.542852  366690 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 15:07:11.542898  366690 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 15:07:12.035624  366690 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1018 15:07:12.039821  366690 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1018 15:07:12.040980  366690 api_server.go:141] control plane version: v1.34.1
	I1018 15:07:12.041008  366690 api_server.go:131] duration metric: took 1.006160505s to wait for apiserver health ...
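
Editor's note: the 500 responses above are expected while the rbac/bootstrap-roles post-start hook is still pending; the poll simply retries until /healthz flips to 200. A minimal sketch of that kind of readiness poll (illustrative only, not minikube's actual api_server.go code; URL and timeout are taken from the log):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns
	// 200 OK or the deadline expires. InsecureSkipVerify mirrors a local
	// test setup where the apiserver cert is not in the host trust store.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports healthy
				}
				// 500 with "[-]poststarthook/... failed" means hooks are
				// still running; keep polling.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.103.2:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
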
	I1018 15:07:12.041019  366690 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:07:12.044443  366690 system_pods.go:59] 8 kube-system pods found
	I1018 15:07:12.044474  366690 system_pods.go:61] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:07:12.044482  366690 system_pods.go:61] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 15:07:12.044490  366690 system_pods.go:61] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:07:12.044497  366690 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 15:07:12.044503  366690 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 15:07:12.044508  366690 system_pods.go:61] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:07:12.044514  366690 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 15:07:12.044519  366690 system_pods.go:61] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Running
	I1018 15:07:12.044527  366690 system_pods.go:74] duration metric: took 3.502402ms to wait for pod list to return data ...
	I1018 15:07:12.044537  366690 default_sa.go:34] waiting for default service account to be created ...
	I1018 15:07:12.046824  366690 default_sa.go:45] found service account: "default"
	I1018 15:07:12.046841  366690 default_sa.go:55] duration metric: took 2.299045ms for default service account to be created ...
	I1018 15:07:12.046848  366690 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 15:07:12.049209  366690 system_pods.go:86] 8 kube-system pods found
	I1018 15:07:12.049233  366690 system_pods.go:89] "coredns-66bc5c9577-dtjgd" [c5abd8b2-0b16-413a-893e-e2d2f9e13f7d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:07:12.049240  366690 system_pods.go:89] "etcd-default-k8s-diff-port-489104" [b6395c8b-3b26-4fd9-b88c-4bad5a3442c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 15:07:12.049246  366690 system_pods.go:89] "kindnet-nvnw6" [7345c2df-3019-4a83-96fc-e02f3704703c] Running
	I1018 15:07:12.049255  366690 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-489104" [46302543-480b-438b-91c6-0bfe090ad2f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 15:07:12.049263  366690 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-489104" [d1fd3d43-7ab8-4907-a606-1922b243139b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 15:07:12.049270  366690 system_pods.go:89] "kube-proxy-7wbfs" [fad0f99a-9792-4603-b5d4-fa7c4c309448] Running
	I1018 15:07:12.049282  366690 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-489104" [17cc10df-07fc-4a42-a159-3eee444487ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 15:07:12.049291  366690 system_pods.go:89] "storage-provisioner" [ae9fdc8f-0be0-4641-abed-fbbfb8e6b466] Running
	I1018 15:07:12.049298  366690 system_pods.go:126] duration metric: took 2.444723ms to wait for k8s-apps to be running ...
	I1018 15:07:12.049307  366690 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 15:07:12.049351  366690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:07:12.063514  366690 system_svc.go:56] duration metric: took 14.195315ms WaitForService to wait for kubelet
	I1018 15:07:12.063550  366690 kubeadm.go:586] duration metric: took 3.513827059s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:07:12.063574  366690 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:07:12.066510  366690 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 15:07:12.066533  366690 node_conditions.go:123] node cpu capacity is 8
	I1018 15:07:12.066545  366690 node_conditions.go:105] duration metric: took 2.966469ms to run NodePressure ...
	I1018 15:07:12.066558  366690 start.go:241] waiting for startup goroutines ...
	I1018 15:07:12.066568  366690 start.go:246] waiting for cluster config update ...
	I1018 15:07:12.066581  366690 start.go:255] writing updated cluster config ...
	I1018 15:07:12.066844  366690 ssh_runner.go:195] Run: rm -f paused
	I1018 15:07:12.071208  366690 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:07:12.074708  366690 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dtjgd" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 15:07:14.080392  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:12.620760  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	W1018 15:07:15.118252  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	I1018 15:07:14.096137  371660 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 15:07:14.096350  371660 start.go:159] libmachine.API.Create for "kindnet-034446" (driver="docker")
	I1018 15:07:14.096381  371660 client.go:168] LocalClient.Create starting
	I1018 15:07:14.096445  371660 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem
	I1018 15:07:14.096478  371660 main.go:141] libmachine: Decoding PEM data...
	I1018 15:07:14.096494  371660 main.go:141] libmachine: Parsing certificate...
	I1018 15:07:14.096560  371660 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem
	I1018 15:07:14.096582  371660 main.go:141] libmachine: Decoding PEM data...
	I1018 15:07:14.096593  371660 main.go:141] libmachine: Parsing certificate...
	I1018 15:07:14.096952  371660 cli_runner.go:164] Run: docker network inspect kindnet-034446 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 15:07:14.115492  371660 cli_runner.go:211] docker network inspect kindnet-034446 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 15:07:14.115582  371660 network_create.go:284] running [docker network inspect kindnet-034446] to gather additional debugging logs...
	I1018 15:07:14.115608  371660 cli_runner.go:164] Run: docker network inspect kindnet-034446
	W1018 15:07:14.134826  371660 cli_runner.go:211] docker network inspect kindnet-034446 returned with exit code 1
	I1018 15:07:14.134874  371660 network_create.go:287] error running [docker network inspect kindnet-034446]: docker network inspect kindnet-034446: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-034446 not found
	I1018 15:07:14.134902  371660 network_create.go:289] output of [docker network inspect kindnet-034446]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-034446 not found
	
	** /stderr **
	I1018 15:07:14.135053  371660 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:07:14.153595  371660 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-67ded9675d49 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:eb:89:76:0f:a6} reservation:<nil>}
	I1018 15:07:14.154271  371660 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4b365c92bc46 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:db:b6:83:36:75} reservation:<nil>}
	I1018 15:07:14.154902  371660 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ab6063c7cdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:eb:32:cc:ab:b4} reservation:<nil>}
	I1018 15:07:14.155565  371660 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4b571e6f85a5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:35:91:99:08:5b} reservation:<nil>}
	I1018 15:07:14.156084  371660 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-047ecbec470e IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:d2:7c:b1:87:9d:5b} reservation:<nil>}
	I1018 15:07:14.156891  371660 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020794d0}
	I1018 15:07:14.156957  371660 network_create.go:124] attempt to create docker network kindnet-034446 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1018 15:07:14.157023  371660 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-034446 kindnet-034446
	I1018 15:07:14.232289  371660 network_create.go:108] docker network kindnet-034446 192.168.94.0/24 created
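
Editor's note: the subnet scan above steps through the private 192.168.x.0/24 range in increments of 9 (49, 58, 67, 76, 85, ...) and takes the first /24 with no matching bridge interface. A rough sketch of that scan, assuming the step-of-9 pattern visible in the log (simplified; the real network.go also tracks reservations):

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet returns the first 192.168.x.0/24 whose gateway
	// address (192.168.x.1) is not already assigned to a local interface,
	// e.g. an existing docker bridge like br-67ded9675d49.
	func firstFreeSubnet() (string, error) {
		taken := map[string]bool{}
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return "", err
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok {
				taken[ipnet.IP.String()] = true
			}
		}
		for octet := 49; octet <= 247; octet += 9 { // same step the log shows
			if gateway := fmt.Sprintf("192.168.%d.1", octet); !taken[gateway] {
				return fmt.Sprintf("192.168.%d.0/24", octet), nil
			}
		}
		return "", fmt.Errorf("no free /24 in 192.168.0.0/16")
	}

	func main() {
		subnet, err := firstFreeSubnet()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("using free private subnet", subnet)
	}
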
	I1018 15:07:14.232318  371660 kic.go:121] calculated static IP "192.168.94.2" for the "kindnet-034446" container
	I1018 15:07:14.232391  371660 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 15:07:14.255372  371660 cli_runner.go:164] Run: docker volume create kindnet-034446 --label name.minikube.sigs.k8s.io=kindnet-034446 --label created_by.minikube.sigs.k8s.io=true
	I1018 15:07:14.276062  371660 oci.go:103] Successfully created a docker volume kindnet-034446
	I1018 15:07:14.276135  371660 cli_runner.go:164] Run: docker run --rm --name kindnet-034446-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-034446 --entrypoint /usr/bin/test -v kindnet-034446:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 15:07:14.701440  371660 oci.go:107] Successfully prepared a docker volume kindnet-034446
	I1018 15:07:14.701495  371660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:07:14.701522  371660 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 15:07:14.701594  371660 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-034446:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1018 15:07:14.776034  359679 node_ready.go:57] node "auto-034446" has "Ready":"False" status (will retry)
	W1018 15:07:17.275694  359679 node_ready.go:57] node "auto-034446" has "Ready":"False" status (will retry)
	W1018 15:07:19.276554  359679 node_ready.go:57] node "auto-034446" has "Ready":"False" status (will retry)
	W1018 15:07:16.081089  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:18.141776  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:17.617450  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	W1018 15:07:19.625758  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	I1018 15:07:20.047407  371660 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-034446:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.345747651s)
	I1018 15:07:20.047446  371660 kic.go:203] duration metric: took 5.345919503s to extract preloaded images to volume ...
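
Editor's note: the preload tarball is unpacked into the freshly created volume by a throwaway kicbase container running tar with lz4 decompression. A hedged sketch of assembling that command from Go (paths and names copied from the log above; this is an illustration of the docker invocation, not minikube's kic.go):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mount the lz4-compressed preload read-only, mount the target
		// volume, and let the kicbase image's tar unpack it; --rm discards
		// the helper container afterwards. Image digest omitted for brevity.
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757"
		preload := "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", preload+":/preloaded.tar:ro",
			"-v", "kindnet-034446:/extractDir",
			image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v\n%s", err, out)
		}
	}
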
	W1018 15:07:20.047581  371660 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 15:07:20.047641  371660 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 15:07:20.047698  371660 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 15:07:20.117198  371660 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-034446 --name kindnet-034446 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-034446 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-034446 --network kindnet-034446 --ip 192.168.94.2 --volume kindnet-034446:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 15:07:20.873259  371660 cli_runner.go:164] Run: docker container inspect kindnet-034446 --format={{.State.Running}}
	I1018 15:07:20.894016  371660 cli_runner.go:164] Run: docker container inspect kindnet-034446 --format={{.State.Status}}
	I1018 15:07:20.915580  371660 cli_runner.go:164] Run: docker exec kindnet-034446 stat /var/lib/dpkg/alternatives/iptables
	I1018 15:07:20.972559  371660 oci.go:144] the created container "kindnet-034446" has a running status.
	I1018 15:07:20.972596  371660 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/kindnet-034446/id_rsa...
	I1018 15:07:21.205424  371660 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-89690/.minikube/machines/kindnet-034446/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 15:07:21.246907  371660 cli_runner.go:164] Run: docker container inspect kindnet-034446 --format={{.State.Status}}
	I1018 15:07:21.268702  371660 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 15:07:21.268729  371660 kic_runner.go:114] Args: [docker exec --privileged kindnet-034446 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 15:07:21.316100  371660 cli_runner.go:164] Run: docker container inspect kindnet-034446 --format={{.State.Status}}
	I1018 15:07:21.338525  371660 machine.go:93] provisionDockerMachine start ...
	I1018 15:07:21.338626  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:21.359771  371660 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:21.360147  371660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 15:07:21.360168  371660 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:07:21.498869  371660 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-034446
	
	I1018 15:07:21.498905  371660 ubuntu.go:182] provisioning hostname "kindnet-034446"
	I1018 15:07:21.499001  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:21.516667  371660 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:21.516967  371660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 15:07:21.516987  371660 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-034446 && echo "kindnet-034446" | sudo tee /etc/hostname
	I1018 15:07:21.665958  371660 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-034446
	
	I1018 15:07:21.666077  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:21.687729  371660 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:21.688029  371660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 15:07:21.688053  371660 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-034446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-034446/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-034446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:07:21.830278  371660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 15:07:21.830314  371660 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 15:07:21.830340  371660 ubuntu.go:190] setting up certificates
	I1018 15:07:21.830360  371660 provision.go:84] configureAuth start
	I1018 15:07:21.830459  371660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-034446
	I1018 15:07:21.850279  371660 provision.go:143] copyHostCerts
	I1018 15:07:21.850349  371660 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem, removing ...
	I1018 15:07:21.850358  371660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:07:21.850410  371660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 15:07:21.850489  371660 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem, removing ...
	I1018 15:07:21.850497  371660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:07:21.850521  371660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 15:07:21.850576  371660 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem, removing ...
	I1018 15:07:21.850583  371660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:07:21.850605  371660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 15:07:21.850659  371660 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.kindnet-034446 san=[127.0.0.1 192.168.94.2 kindnet-034446 localhost minikube]
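
Editor's note: the server cert generated here covers the SANs listed in the log (127.0.0.1, 192.168.94.2, kindnet-034446, localhost, minikube). A toy version of that step using the standard library; unlike minikube, which signs with its CA key from ca-key.pem, this sketch self-signs to stay short:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-034446"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"kindnet-034446", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		}
		// Self-signed: template doubles as parent. Real code passes the CA cert and key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		fmt.Println("server cert generated")
	}
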
	I1018 15:07:22.039444  371660 provision.go:177] copyRemoteCerts
	I1018 15:07:22.039621  371660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:07:22.039684  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:22.057394  371660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/kindnet-034446/id_rsa Username:docker}
	I1018 15:07:22.154772  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:07:22.174888  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1018 15:07:22.192136  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 15:07:22.209516  371660 provision.go:87] duration metric: took 379.142168ms to configureAuth
	I1018 15:07:22.209543  371660 ubuntu.go:206] setting minikube options for container-runtime
	I1018 15:07:22.209717  371660 config.go:182] Loaded profile config "kindnet-034446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:22.209837  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:22.227416  371660 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:22.227623  371660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 15:07:22.227639  371660 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:07:22.475966  371660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 15:07:22.475995  371660 machine.go:96] duration metric: took 1.137446715s to provisionDockerMachine
	I1018 15:07:22.476005  371660 client.go:171] duration metric: took 8.37961583s to LocalClient.Create
	I1018 15:07:22.476024  371660 start.go:167] duration metric: took 8.379677213s to libmachine.API.Create "kindnet-034446"
	I1018 15:07:22.476031  371660 start.go:293] postStartSetup for "kindnet-034446" (driver="docker")
	I1018 15:07:22.476040  371660 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:07:22.476109  371660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:07:22.476149  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:22.493244  371660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/kindnet-034446/id_rsa Username:docker}
	I1018 15:07:22.591112  371660 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:07:22.594514  371660 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 15:07:22.594540  371660 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 15:07:22.594552  371660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/addons for local assets ...
	I1018 15:07:22.594606  371660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/files for local assets ...
	I1018 15:07:22.594676  371660 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem -> 931872.pem in /etc/ssl/certs
	I1018 15:07:22.594785  371660 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:07:22.602370  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:07:22.623276  371660 start.go:296] duration metric: took 147.22765ms for postStartSetup
	I1018 15:07:22.623653  371660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-034446
	I1018 15:07:22.641709  371660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/config.json ...
	I1018 15:07:22.642064  371660 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 15:07:22.642114  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:22.661554  371660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/kindnet-034446/id_rsa Username:docker}
	I1018 15:07:22.755222  371660 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 15:07:22.759746  371660 start.go:128] duration metric: took 8.666049457s to createHost
	I1018 15:07:22.759771  371660 start.go:83] releasing machines lock for "kindnet-034446", held for 8.666176064s
	I1018 15:07:22.759838  371660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-034446
	I1018 15:07:22.777202  371660 ssh_runner.go:195] Run: cat /version.json
	I1018 15:07:22.777305  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:22.777337  371660 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:07:22.777402  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:22.798377  371660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/kindnet-034446/id_rsa Username:docker}
	I1018 15:07:22.799823  371660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/kindnet-034446/id_rsa Username:docker}
	I1018 15:07:22.902467  371660 ssh_runner.go:195] Run: systemctl --version
	I1018 15:07:22.959807  371660 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:07:22.995063  371660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:07:22.999786  371660 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:07:22.999854  371660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:07:23.025679  371660 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 15:07:23.025708  371660 start.go:495] detecting cgroup driver to use...
	I1018 15:07:23.025743  371660 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 15:07:23.025798  371660 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:07:23.041511  371660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:07:23.054209  371660 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:07:23.054288  371660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:07:23.070759  371660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:07:23.090926  371660 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:07:23.175598  371660 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:07:23.263720  371660 docker.go:234] disabling docker service ...
	I1018 15:07:23.263787  371660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:07:23.282658  371660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:07:23.295540  371660 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:07:23.381560  371660 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:07:23.465313  371660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
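The stop/disable/mask sequence above is the standard way to keep both socket activation and manual restarts from bringing dockerd back while CRI-O owns the node; masking points the unit at /dev/null so even a dependency cannot start it. The same steps as plain commands:

	# Stop dockerd, disable its socket activation, and mask the service.
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service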
	I1018 15:07:23.478568  371660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:07:23.492560  371660 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 15:07:23.492632  371660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:23.502731  371660 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 15:07:23.502799  371660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:23.511076  371660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:23.519684  371660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:23.528239  371660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:07:23.536295  371660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:23.546252  371660 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:23.559726  371660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
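Net effect of the sed edits above: the CRI-O drop-in pins the pause image, switches the cgroup manager to systemd with conmon in the pod cgroup, and opens unprivileged low ports. A sketch of the keys you should find afterwards (the surrounding file content depends on the base image):

	# Expected keys in the drop-in after the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ... ]
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf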
	I1018 15:07:23.568173  371660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:07:23.575309  371660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 15:07:23.583469  371660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:07:23.666297  371660 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 15:07:23.771950  371660 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:07:23.772024  371660 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:07:23.776411  371660 start.go:563] Will wait 60s for crictl version
	I1018 15:07:23.776467  371660 ssh_runner.go:195] Run: which crictl
	I1018 15:07:23.780881  371660 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 15:07:23.813333  371660 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 15:07:23.813424  371660 ssh_runner.go:195] Run: crio --version
	I1018 15:07:23.847150  371660 ssh_runner.go:195] Run: crio --version
	I1018 15:07:23.877843  371660 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 15:07:23.879060  371660 cli_runner.go:164] Run: docker network inspect kindnet-034446 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
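The Go template in that `docker network inspect` call flattens name, driver, subnet, gateway, MTU and container IPs into one parseable line. The same data with a simpler format string, for manual inspection (sketch):

	# Inspect the per-profile Docker network without the full custom template.
	docker network inspect kindnet-034446 \
	  --format '{{.Name}} {{.Driver}} {{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'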
	W1018 15:07:21.776334  359679 node_ready.go:57] node "auto-034446" has "Ready":"False" status (will retry)
	I1018 15:07:24.275871  359679 node_ready.go:49] node "auto-034446" is "Ready"
	I1018 15:07:24.275898  359679 node_ready.go:38] duration metric: took 11.503815451s for node "auto-034446" to be "Ready" ...
	I1018 15:07:24.275928  359679 api_server.go:52] waiting for apiserver process to appear ...
	I1018 15:07:24.275981  359679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 15:07:24.289006  359679 api_server.go:72] duration metric: took 11.850113502s to wait for apiserver process to appear ...
	I1018 15:07:24.289032  359679 api_server.go:88] waiting for apiserver healthz status ...
	I1018 15:07:24.289055  359679 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	W1018 15:07:20.581710  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:23.080059  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:22.116980  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	W1018 15:07:24.121383  358344 pod_ready.go:104] pod "coredns-66bc5c9577-4b6bm" is not "Ready", error: <nil>
	I1018 15:07:26.117417  358344 pod_ready.go:94] pod "coredns-66bc5c9577-4b6bm" is "Ready"
	I1018 15:07:26.117447  358344 pod_ready.go:86] duration metric: took 31.505818539s for pod "coredns-66bc5c9577-4b6bm" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.119875  358344 pod_ready.go:83] waiting for pod "etcd-embed-certs-775590" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.124114  358344 pod_ready.go:94] pod "etcd-embed-certs-775590" is "Ready"
	I1018 15:07:26.124141  358344 pod_ready.go:86] duration metric: took 4.23626ms for pod "etcd-embed-certs-775590" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.126260  358344 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-775590" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.130034  358344 pod_ready.go:94] pod "kube-apiserver-embed-certs-775590" is "Ready"
	I1018 15:07:26.130053  358344 pod_ready.go:86] duration metric: took 3.774362ms for pod "kube-apiserver-embed-certs-775590" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.131946  358344 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-775590" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.315677  358344 pod_ready.go:94] pod "kube-controller-manager-embed-certs-775590" is "Ready"
	I1018 15:07:26.315709  358344 pod_ready.go:86] duration metric: took 183.743174ms for pod "kube-controller-manager-embed-certs-775590" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:24.294020  359679 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 15:07:24.295090  359679 api_server.go:141] control plane version: v1.34.1
	I1018 15:07:24.295112  359679 api_server.go:131] duration metric: took 6.072568ms to wait for apiserver health ...
	I1018 15:07:24.295120  359679 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 15:07:24.298983  359679 system_pods.go:59] 8 kube-system pods found
	I1018 15:07:24.299020  359679 system_pods.go:61] "coredns-66bc5c9577-j7gm7" [d175b3ed-f479-4f1d-bb1e-b91468314f7b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:07:24.299030  359679 system_pods.go:61] "etcd-auto-034446" [2a93a913-3768-4fc7-942f-3e8917104a32] Running
	I1018 15:07:24.299044  359679 system_pods.go:61] "kindnet-jhq9x" [b65fbd5f-1385-4bc2-b1ab-2903e28cecfb] Running
	I1018 15:07:24.299050  359679 system_pods.go:61] "kube-apiserver-auto-034446" [d2eccfbc-3d64-4b50-a763-d587a5b75732] Running
	I1018 15:07:24.299059  359679 system_pods.go:61] "kube-controller-manager-auto-034446" [050c4bf7-fb90-4b0d-a123-45ec8ff89907] Running
	I1018 15:07:24.299069  359679 system_pods.go:61] "kube-proxy-9xrg6" [947f8c47-d0ad-4d57-afa1-fd655c273e1c] Running
	I1018 15:07:24.299074  359679 system_pods.go:61] "kube-scheduler-auto-034446" [6e1853af-73e5-4d14-8cc9-5c9c52c4baa1] Running
	I1018 15:07:24.299090  359679 system_pods.go:61] "storage-provisioner" [4c96c4f7-24df-4d4a-95fc-4a7afcf80430] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:07:24.299110  359679 system_pods.go:74] duration metric: took 3.975007ms to wait for pod list to return data ...
	I1018 15:07:24.299124  359679 default_sa.go:34] waiting for default service account to be created ...
	I1018 15:07:24.301445  359679 default_sa.go:45] found service account: "default"
	I1018 15:07:24.301467  359679 default_sa.go:55] duration metric: took 2.336234ms for default service account to be created ...
	I1018 15:07:24.301476  359679 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 15:07:24.304133  359679 system_pods.go:86] 8 kube-system pods found
	I1018 15:07:24.304164  359679 system_pods.go:89] "coredns-66bc5c9577-j7gm7" [d175b3ed-f479-4f1d-bb1e-b91468314f7b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:07:24.304172  359679 system_pods.go:89] "etcd-auto-034446" [2a93a913-3768-4fc7-942f-3e8917104a32] Running
	I1018 15:07:24.304180  359679 system_pods.go:89] "kindnet-jhq9x" [b65fbd5f-1385-4bc2-b1ab-2903e28cecfb] Running
	I1018 15:07:24.304188  359679 system_pods.go:89] "kube-apiserver-auto-034446" [d2eccfbc-3d64-4b50-a763-d587a5b75732] Running
	I1018 15:07:24.304193  359679 system_pods.go:89] "kube-controller-manager-auto-034446" [050c4bf7-fb90-4b0d-a123-45ec8ff89907] Running
	I1018 15:07:24.304202  359679 system_pods.go:89] "kube-proxy-9xrg6" [947f8c47-d0ad-4d57-afa1-fd655c273e1c] Running
	I1018 15:07:24.304207  359679 system_pods.go:89] "kube-scheduler-auto-034446" [6e1853af-73e5-4d14-8cc9-5c9c52c4baa1] Running
	I1018 15:07:24.304216  359679 system_pods.go:89] "storage-provisioner" [4c96c4f7-24df-4d4a-95fc-4a7afcf80430] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:07:24.304240  359679 retry.go:31] will retry after 281.74581ms: missing components: kube-dns
	I1018 15:07:24.590201  359679 system_pods.go:86] 8 kube-system pods found
	I1018 15:07:24.590235  359679 system_pods.go:89] "coredns-66bc5c9577-j7gm7" [d175b3ed-f479-4f1d-bb1e-b91468314f7b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:07:24.590241  359679 system_pods.go:89] "etcd-auto-034446" [2a93a913-3768-4fc7-942f-3e8917104a32] Running
	I1018 15:07:24.590247  359679 system_pods.go:89] "kindnet-jhq9x" [b65fbd5f-1385-4bc2-b1ab-2903e28cecfb] Running
	I1018 15:07:24.590250  359679 system_pods.go:89] "kube-apiserver-auto-034446" [d2eccfbc-3d64-4b50-a763-d587a5b75732] Running
	I1018 15:07:24.590253  359679 system_pods.go:89] "kube-controller-manager-auto-034446" [050c4bf7-fb90-4b0d-a123-45ec8ff89907] Running
	I1018 15:07:24.590257  359679 system_pods.go:89] "kube-proxy-9xrg6" [947f8c47-d0ad-4d57-afa1-fd655c273e1c] Running
	I1018 15:07:24.590262  359679 system_pods.go:89] "kube-scheduler-auto-034446" [6e1853af-73e5-4d14-8cc9-5c9c52c4baa1] Running
	I1018 15:07:24.590269  359679 system_pods.go:89] "storage-provisioner" [4c96c4f7-24df-4d4a-95fc-4a7afcf80430] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:07:24.590289  359679 retry.go:31] will retry after 251.940778ms: missing components: kube-dns
	I1018 15:07:24.846109  359679 system_pods.go:86] 8 kube-system pods found
	I1018 15:07:24.846146  359679 system_pods.go:89] "coredns-66bc5c9577-j7gm7" [d175b3ed-f479-4f1d-bb1e-b91468314f7b] Running
	I1018 15:07:24.846154  359679 system_pods.go:89] "etcd-auto-034446" [2a93a913-3768-4fc7-942f-3e8917104a32] Running
	I1018 15:07:24.846159  359679 system_pods.go:89] "kindnet-jhq9x" [b65fbd5f-1385-4bc2-b1ab-2903e28cecfb] Running
	I1018 15:07:24.846164  359679 system_pods.go:89] "kube-apiserver-auto-034446" [d2eccfbc-3d64-4b50-a763-d587a5b75732] Running
	I1018 15:07:24.846169  359679 system_pods.go:89] "kube-controller-manager-auto-034446" [050c4bf7-fb90-4b0d-a123-45ec8ff89907] Running
	I1018 15:07:24.846175  359679 system_pods.go:89] "kube-proxy-9xrg6" [947f8c47-d0ad-4d57-afa1-fd655c273e1c] Running
	I1018 15:07:24.846179  359679 system_pods.go:89] "kube-scheduler-auto-034446" [6e1853af-73e5-4d14-8cc9-5c9c52c4baa1] Running
	I1018 15:07:24.846184  359679 system_pods.go:89] "storage-provisioner" [4c96c4f7-24df-4d4a-95fc-4a7afcf80430] Running
	I1018 15:07:24.846194  359679 system_pods.go:126] duration metric: took 544.711692ms to wait for k8s-apps to be running ...
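The two retries above are minikube's system_pods poll backing off (281ms, then 251ms) until coredns leaves Pending. A coarse standalone equivalent, assuming kubectl is pointed at the same cluster (the timeout is illustrative):

	# Wait for every kube-system pod to report Ready, roughly matching the poll above.
	kubectl -n kube-system wait --for=condition=Ready pods --all --timeout=120s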
	I1018 15:07:24.846204  359679 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 15:07:24.846255  359679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:07:24.859825  359679 system_svc.go:56] duration metric: took 13.610777ms WaitForService to wait for kubelet
	I1018 15:07:24.859860  359679 kubeadm.go:586] duration metric: took 12.4209713s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:07:24.859882  359679 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:07:24.863244  359679 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 15:07:24.863278  359679 node_conditions.go:123] node cpu capacity is 8
	I1018 15:07:24.863296  359679 node_conditions.go:105] duration metric: took 3.408412ms to run NodePressure ...
	I1018 15:07:24.863311  359679 start.go:241] waiting for startup goroutines ...
	I1018 15:07:24.863334  359679 start.go:246] waiting for cluster config update ...
	I1018 15:07:24.863351  359679 start.go:255] writing updated cluster config ...
	I1018 15:07:24.863700  359679 ssh_runner.go:195] Run: rm -f paused
	I1018 15:07:24.867845  359679 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:07:24.871735  359679 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j7gm7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:24.876366  359679 pod_ready.go:94] pod "coredns-66bc5c9577-j7gm7" is "Ready"
	I1018 15:07:24.876384  359679 pod_ready.go:86] duration metric: took 4.629258ms for pod "coredns-66bc5c9577-j7gm7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:24.880651  359679 pod_ready.go:83] waiting for pod "etcd-auto-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:24.884526  359679 pod_ready.go:94] pod "etcd-auto-034446" is "Ready"
	I1018 15:07:24.884547  359679 pod_ready.go:86] duration metric: took 3.875476ms for pod "etcd-auto-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:24.886422  359679 pod_ready.go:83] waiting for pod "kube-apiserver-auto-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:24.890095  359679 pod_ready.go:94] pod "kube-apiserver-auto-034446" is "Ready"
	I1018 15:07:24.890115  359679 pod_ready.go:86] duration metric: took 3.67411ms for pod "kube-apiserver-auto-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:24.891811  359679 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:25.272808  359679 pod_ready.go:94] pod "kube-controller-manager-auto-034446" is "Ready"
	I1018 15:07:25.272838  359679 pod_ready.go:86] duration metric: took 381.009525ms for pod "kube-controller-manager-auto-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:25.472997  359679 pod_ready.go:83] waiting for pod "kube-proxy-9xrg6" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:25.872552  359679 pod_ready.go:94] pod "kube-proxy-9xrg6" is "Ready"
	I1018 15:07:25.872585  359679 pod_ready.go:86] duration metric: took 399.564185ms for pod "kube-proxy-9xrg6" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.073777  359679 pod_ready.go:83] waiting for pod "kube-scheduler-auto-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.472545  359679 pod_ready.go:94] pod "kube-scheduler-auto-034446" is "Ready"
	I1018 15:07:26.472572  359679 pod_ready.go:86] duration metric: took 398.765064ms for pod "kube-scheduler-auto-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.472589  359679 pod_ready.go:40] duration metric: took 1.604708988s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:07:26.518392  359679 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:07:26.520329  359679 out.go:179] * Done! kubectl is now configured to use "auto-034446" cluster and "default" namespace by default
	I1018 15:07:26.515976  358344 pod_ready.go:83] waiting for pod "kube-proxy-clcpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:26.915670  358344 pod_ready.go:94] pod "kube-proxy-clcpk" is "Ready"
	I1018 15:07:26.915710  358344 pod_ready.go:86] duration metric: took 399.704546ms for pod "kube-proxy-clcpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:27.115958  358344 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-775590" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:27.515863  358344 pod_ready.go:94] pod "kube-scheduler-embed-certs-775590" is "Ready"
	I1018 15:07:27.515893  358344 pod_ready.go:86] duration metric: took 399.905934ms for pod "kube-scheduler-embed-certs-775590" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:27.515908  358344 pod_ready.go:40] duration metric: took 32.976617507s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:07:27.588087  358344 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:07:27.589888  358344 out.go:179] * Done! kubectl is now configured to use "embed-certs-775590" cluster and "default" namespace by default
	I1018 15:07:23.896408  371660 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 15:07:23.900765  371660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
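The /etc/hosts edit above filters out any stale entry before re-appending, so repeated starts stay idempotent. The generic shape of that pattern, with the name and IP from this run:

	# Idempotently pin a host entry: drop the old line, append the new one.
	NAME=host.minikube.internal
	IP=192.168.94.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$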
	I1018 15:07:23.911675  371660 kubeadm.go:883] updating cluster {Name:kindnet-034446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-034446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 15:07:23.911846  371660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:07:23.911947  371660 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:07:23.945020  371660 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:07:23.945044  371660 crio.go:433] Images already preloaded, skipping extraction
	I1018 15:07:23.945093  371660 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:07:23.972204  371660 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:07:23.972230  371660 cache_images.go:85] Images are preloaded, skipping loading
	I1018 15:07:23.972241  371660 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1018 15:07:23.972360  371660 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-034446 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-034446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1018 15:07:23.972449  371660 ssh_runner.go:195] Run: crio config
	I1018 15:07:24.019780  371660 cni.go:84] Creating CNI manager for "kindnet"
	I1018 15:07:24.019810  371660 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 15:07:24.019831  371660 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-034446 NodeName:kindnet-034446 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 15:07:24.020003  371660 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-034446"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 15:07:24.020065  371660 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 15:07:24.028770  371660 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 15:07:24.028828  371660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 15:07:24.036928  371660 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1018 15:07:24.050614  371660 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 15:07:24.066491  371660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
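With the three-document config above written to /var/tmp/minikube/kubeadm.yaml.new, a dry run is a cheap way to surface schema errors before the real init a few steps later; a sketch using the staged binary path from this run:

	# Render what kubeadm would do with the generated config, without applying it.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run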
	I1018 15:07:24.079414  371660 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 15:07:24.083341  371660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:07:24.093399  371660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:07:24.194491  371660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:07:24.220282  371660 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446 for IP: 192.168.94.2
	I1018 15:07:24.220306  371660 certs.go:195] generating shared ca certs ...
	I1018 15:07:24.220327  371660 certs.go:227] acquiring lock for ca certs: {Name:mk6d3b4e44856f498157352ffe8bb89d2c7a3998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:24.220495  371660 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key
	I1018 15:07:24.220556  371660 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key
	I1018 15:07:24.220571  371660 certs.go:257] generating profile certs ...
	I1018 15:07:24.220649  371660 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/client.key
	I1018 15:07:24.220673  371660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/client.crt with IP's: []
	I1018 15:07:24.442965  371660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/client.crt ...
	I1018 15:07:24.442993  371660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/client.crt: {Name:mk4954a4e27ec62317870f3520bc2e13d3dc0d01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:24.443169  371660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/client.key ...
	I1018 15:07:24.443184  371660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/client.key: {Name:mk474a6b5c93b148014a0c6e5baf85dab6bd095e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:24.443262  371660 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.key.f33f2128
	I1018 15:07:24.443277  371660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.crt.f33f2128 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1018 15:07:24.954250  371660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.crt.f33f2128 ...
	I1018 15:07:24.954278  371660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.crt.f33f2128: {Name:mkf27c872cee6095c1af96c3010a2644d05d26cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:24.954438  371660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.key.f33f2128 ...
	I1018 15:07:24.954451  371660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.key.f33f2128: {Name:mkef370b8f324961cb64b8eb1d2e4ec9286db57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:24.954560  371660 certs.go:382] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.crt.f33f2128 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.crt
	I1018 15:07:24.954674  371660 certs.go:386] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.key.f33f2128 -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.key
	I1018 15:07:24.954735  371660 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/proxy-client.key
	I1018 15:07:24.954750  371660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/proxy-client.crt with IP's: []
	I1018 15:07:25.324614  371660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/proxy-client.crt ...
	I1018 15:07:25.324644  371660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/proxy-client.crt: {Name:mk5f217dde31a4ced1bb6aaab44cb4470463885d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:25.324844  371660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/proxy-client.key ...
	I1018 15:07:25.324862  371660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/proxy-client.key: {Name:mk0f8f2fec5f9b0d4c608f512ef8d8048a5cf396 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
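The client, apiserver, and aggregator certs above are produced in-process by minikube's crypto.go; the equivalent openssl flow is a reasonable mental model. A hedged sketch for a client cert signed by the shared minikubeCA (the subject line is illustrative, not necessarily what crypto.go embeds):

	# Sketch: key + CSR for a client identity, signed by the minikube CA pair.
	CA=/home/jenkins/minikube-integration/21409-89690/.minikube/ca
	openssl genrsa -out client.key 2048
	openssl req -new -key client.key -subj '/O=system:masters/CN=minikube-user' -out client.csr
	openssl x509 -req -in client.csr -CA "$CA.crt" -CAkey "$CA.key" \
	  -CAcreateserial -days 365 -out client.crt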
	I1018 15:07:25.325133  371660 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem (1338 bytes)
	W1018 15:07:25.325177  371660 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187_empty.pem, impossibly tiny 0 bytes
	I1018 15:07:25.325194  371660 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 15:07:25.325220  371660 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem (1082 bytes)
	I1018 15:07:25.325248  371660 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem (1123 bytes)
	I1018 15:07:25.325274  371660 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem (1675 bytes)
	I1018 15:07:25.325332  371660 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:07:25.325897  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 15:07:25.345048  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 15:07:25.362660  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 15:07:25.381760  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 15:07:25.399926  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 15:07:25.417830  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 15:07:25.434880  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 15:07:25.452329  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kindnet-034446/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 15:07:25.470795  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem --> /usr/share/ca-certificates/93187.pem (1338 bytes)
	I1018 15:07:25.491476  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /usr/share/ca-certificates/931872.pem (1708 bytes)
	I1018 15:07:25.509337  371660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 15:07:25.527647  371660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 15:07:25.540582  371660 ssh_runner.go:195] Run: openssl version
	I1018 15:07:25.546869  371660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/931872.pem && ln -fs /usr/share/ca-certificates/931872.pem /etc/ssl/certs/931872.pem"
	I1018 15:07:25.555897  371660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/931872.pem
	I1018 15:07:25.559654  371660 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 14:26 /usr/share/ca-certificates/931872.pem
	I1018 15:07:25.559724  371660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/931872.pem
	I1018 15:07:25.596726  371660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/931872.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 15:07:25.607022  371660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 15:07:25.616789  371660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:07:25.620802  371660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:07:25.620863  371660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:07:25.655008  371660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 15:07:25.664466  371660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93187.pem && ln -fs /usr/share/ca-certificates/93187.pem /etc/ssl/certs/93187.pem"
	I1018 15:07:25.675678  371660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93187.pem
	I1018 15:07:25.679956  371660 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 14:26 /usr/share/ca-certificates/93187.pem
	I1018 15:07:25.680023  371660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93187.pem
	I1018 15:07:25.726061  371660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93187.pem /etc/ssl/certs/51391683.0"
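The `openssl x509 -hash` runs and the 3ec20f2e.0 / b5213941.0 / 51391683.0 symlinks above follow OpenSSL's subject-hash convention: the link name in /etc/ssl/certs is the certificate's subject hash plus a .0 suffix, which is how the trust store locates issuers. The pattern in isolation:

	# Compute the subject hash and create the trust-store symlink OpenSSL expects.
	PEM=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")   # b5213941 in this run
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"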
	I1018 15:07:25.735327  371660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 15:07:25.739544  371660 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 15:07:25.739601  371660 kubeadm.go:400] StartCluster: {Name:kindnet-034446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-034446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:07:25.739666  371660 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 15:07:25.739716  371660 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 15:07:25.768810  371660 cri.go:89] found id: ""
	I1018 15:07:25.768876  371660 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 15:07:25.777836  371660 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 15:07:25.786842  371660 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 15:07:25.786902  371660 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 15:07:25.795939  371660 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 15:07:25.795962  371660 kubeadm.go:157] found existing configuration files:
	
	I1018 15:07:25.796013  371660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 15:07:25.805341  371660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 15:07:25.805400  371660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 15:07:25.813858  371660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 15:07:25.822152  371660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 15:07:25.822218  371660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 15:07:25.830422  371660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 15:07:25.838612  371660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 15:07:25.838674  371660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 15:07:25.846478  371660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 15:07:25.854432  371660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 15:07:25.854478  371660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 15:07:25.862525  371660 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 15:07:25.921475  371660 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 15:07:25.986682  371660 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
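The long --ignore-preflight-errors list in the init command above waives exactly the checks that cannot pass inside a Docker-driver container (occupied ports, swap, kernel config, pre-seeded manifest dirs), leaving only the two non-fatal [WARNING] lines. To enumerate what would fail on a node without the waivers (sketch):

	# Run only kubeadm's preflight phase, with no checks waived.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight \
	  --config /var/tmp/minikube/kubeadm.yaml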
	W1018 15:07:25.081480  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:27.584594  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:30.080648  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:32.080780  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:34.081420  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	I1018 15:07:35.948611  371660 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 15:07:35.948684  371660 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 15:07:35.948816  371660 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 15:07:35.948905  371660 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 15:07:35.948990  371660 kubeadm.go:318] OS: Linux
	I1018 15:07:35.949080  371660 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 15:07:35.949145  371660 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 15:07:35.949217  371660 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 15:07:35.949285  371660 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 15:07:35.949352  371660 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 15:07:35.949433  371660 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 15:07:35.949514  371660 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 15:07:35.949597  371660 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 15:07:35.949701  371660 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 15:07:35.949841  371660 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 15:07:35.949989  371660 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 15:07:35.950095  371660 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 15:07:35.952335  371660 out.go:252]   - Generating certificates and keys ...
	I1018 15:07:35.952420  371660 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 15:07:35.952501  371660 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 15:07:35.952591  371660 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 15:07:35.952679  371660 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 15:07:35.952754  371660 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 15:07:35.952832  371660 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 15:07:35.952893  371660 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 15:07:35.953107  371660 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [kindnet-034446 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 15:07:35.953198  371660 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 15:07:35.953374  371660 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [kindnet-034446 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 15:07:35.953442  371660 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 15:07:35.953515  371660 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 15:07:35.953557  371660 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 15:07:35.953605  371660 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 15:07:35.953651  371660 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 15:07:35.953697  371660 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 15:07:35.953745  371660 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 15:07:35.953848  371660 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 15:07:35.953899  371660 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 15:07:35.954002  371660 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 15:07:35.954091  371660 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 15:07:35.955672  371660 out.go:252]   - Booting up control plane ...
	I1018 15:07:35.955841  371660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 15:07:35.955974  371660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 15:07:35.956060  371660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 15:07:35.956197  371660 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 15:07:35.956318  371660 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 15:07:35.956496  371660 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 15:07:35.956629  371660 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 15:07:35.956693  371660 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 15:07:35.956890  371660 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 15:07:35.957056  371660 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 15:07:35.957177  371660 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.836732ms
	I1018 15:07:35.957308  371660 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 15:07:35.957421  371660 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1018 15:07:35.957555  371660 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 15:07:35.957666  371660 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 15:07:35.957749  371660 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.848799048s
	I1018 15:07:35.957818  371660 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.452252864s
	I1018 15:07:35.957964  371660 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001395984s
	I1018 15:07:35.958112  371660 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 15:07:35.958254  371660 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 15:07:35.958329  371660 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 15:07:35.958578  371660 kubeadm.go:318] [mark-control-plane] Marking the node kindnet-034446 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 15:07:35.958662  371660 kubeadm.go:318] [bootstrap-token] Using token: zc9t65.l8nr7otv2eknn5q1
	I1018 15:07:35.960548  371660 out.go:252]   - Configuring RBAC rules ...
	I1018 15:07:35.960697  371660 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 15:07:35.960815  371660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 15:07:35.961074  371660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 15:07:35.961270  371660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 15:07:35.961413  371660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 15:07:35.961506  371660 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 15:07:35.961693  371660 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 15:07:35.961756  371660 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 15:07:35.961850  371660 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 15:07:35.961860  371660 kubeadm.go:318] 
	I1018 15:07:35.961969  371660 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 15:07:35.961996  371660 kubeadm.go:318] 
	I1018 15:07:35.962092  371660 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 15:07:35.962100  371660 kubeadm.go:318] 
	I1018 15:07:35.962123  371660 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 15:07:35.962179  371660 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 15:07:35.962224  371660 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 15:07:35.962229  371660 kubeadm.go:318] 
	I1018 15:07:35.962287  371660 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 15:07:35.962298  371660 kubeadm.go:318] 
	I1018 15:07:35.962335  371660 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 15:07:35.962342  371660 kubeadm.go:318] 
	I1018 15:07:35.962393  371660 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 15:07:35.962457  371660 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 15:07:35.962518  371660 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 15:07:35.962523  371660 kubeadm.go:318] 
	I1018 15:07:35.962615  371660 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 15:07:35.962701  371660 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 15:07:35.962708  371660 kubeadm.go:318] 
	I1018 15:07:35.962814  371660 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token zc9t65.l8nr7otv2eknn5q1 \
	I1018 15:07:35.962904  371660 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9845daa2461a34dd973607f8353b114051f1b03075ff9500a74fb21865e04329 \
	I1018 15:07:35.963009  371660 kubeadm.go:318] 	--control-plane 
	I1018 15:07:35.963024  371660 kubeadm.go:318] 
	I1018 15:07:35.963115  371660 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 15:07:35.963122  371660 kubeadm.go:318] 
	I1018 15:07:35.963289  371660 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token zc9t65.l8nr7otv2eknn5q1 \
	I1018 15:07:35.963407  371660 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9845daa2461a34dd973607f8353b114051f1b03075ff9500a74fb21865e04329 
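For reference, the --discovery-token-ca-cert-hash value printed above can be re-derived on the control plane from the cluster CA. A minimal sketch, assuming the default CA path /etc/kubernetes/pki/ca.crt (standard kubeadm procedure, not a command this test ran):

	# Recompute the sha256 discovery hash from the cluster CA public key
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'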
	I1018 15:07:35.963422  371660 cni.go:84] Creating CNI manager for "kindnet"
	I1018 15:07:35.965234  371660 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 15:07:35.967496  371660 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 15:07:35.972601  371660 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 15:07:35.972626  371660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 15:07:35.986768  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 15:07:36.240361  371660 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 15:07:36.240476  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:36.240555  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-034446 minikube.k8s.io/updated_at=2025_10_18T15_07_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=kindnet-034446 minikube.k8s.io/primary=true
	I1018 15:07:36.252479  371660 ops.go:34] apiserver oom_adj: -16
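The oom_adj of -16 noted above biases the kernel OOM killer away from kube-apiserver (lower values make a process less likely to be killed under memory pressure). A hypothetical manual check of the same value, assuming a single apiserver process on the node:

	# Legacy knob (the one read above) and its modern replacement
	sudo cat /proc/"$(pgrep -o kube-apiserver)"/oom_adj
	sudo cat /proc/"$(pgrep -o kube-apiserver)"/oom_score_adj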
	I1018 15:07:36.348724  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:36.848804  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:37.348781  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:37.849119  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:38.349251  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:38.849124  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1018 15:07:36.580100  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	W1018 15:07:38.580745  366690 pod_ready.go:104] pod "coredns-66bc5c9577-dtjgd" is not "Ready", error: <nil>
	I1018 15:07:39.349128  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:39.849576  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:40.349582  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:40.849144  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:41.349514  371660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 15:07:41.420931  371660 kubeadm.go:1113] duration metric: took 5.180515097s to wait for elevateKubeSystemPrivileges
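The repeated "get sa default" runs above are a readiness poll: minikube retries until the default ServiceAccount exists before granting kube-system privileges. Roughly equivalent shell, as a sketch of the loop rather than minikube's actual Go code:

	# Poll every 500ms until the default ServiceAccount is created
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done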
	I1018 15:07:41.420975  371660 kubeadm.go:402] duration metric: took 15.681377419s to StartCluster
	I1018 15:07:41.420999  371660 settings.go:142] acquiring lock: {Name:mk9d9d72167811b3554435e137ed17137bf54a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:41.421127  371660 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:07:41.424048  371660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/kubeconfig: {Name:mkdc6fbc9be8e57447490b30dd5bb0086611876e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:41.424391  371660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 15:07:41.424415  371660 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 15:07:41.424391  371660 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:07:41.424506  371660 addons.go:69] Setting storage-provisioner=true in profile "kindnet-034446"
	I1018 15:07:41.424524  371660 addons.go:238] Setting addon storage-provisioner=true in "kindnet-034446"
	I1018 15:07:41.424554  371660 addons.go:69] Setting default-storageclass=true in profile "kindnet-034446"
	I1018 15:07:41.424578  371660 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-034446"
	I1018 15:07:41.424655  371660 config.go:182] Loaded profile config "kindnet-034446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:41.424550  371660 host.go:66] Checking if "kindnet-034446" exists ...
	I1018 15:07:41.425091  371660 cli_runner.go:164] Run: docker container inspect kindnet-034446 --format={{.State.Status}}
	I1018 15:07:41.425896  371660 cli_runner.go:164] Run: docker container inspect kindnet-034446 --format={{.State.Status}}
	I1018 15:07:41.426756  371660 out.go:179] * Verifying Kubernetes components...
	I1018 15:07:41.428337  371660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:07:41.452034  371660 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 15:07:41.453341  371660 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 15:07:41.453363  371660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 15:07:41.453601  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:41.455616  371660 addons.go:238] Setting addon default-storageclass=true in "kindnet-034446"
	I1018 15:07:41.455665  371660 host.go:66] Checking if "kindnet-034446" exists ...
	I1018 15:07:41.456740  371660 cli_runner.go:164] Run: docker container inspect kindnet-034446 --format={{.State.Status}}
	I1018 15:07:41.485032  371660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/kindnet-034446/id_rsa Username:docker}
	I1018 15:07:41.488456  371660 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 15:07:41.488482  371660 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 15:07:41.488552  371660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034446
	I1018 15:07:41.509714  371660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/kindnet-034446/id_rsa Username:docker}
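The inspect template above resolves the host port Docker published for the container's 22/tcp endpoint, which is why both ssh clients connect to 127.0.0.1:33108. The same lookup, runnable by hand:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  kindnet-034446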
	I1018 15:07:41.531867  371660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 15:07:41.577137  371660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:07:41.604947  371660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 15:07:41.625484  371660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 15:07:41.758107  371660 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
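The sed pipeline at 15:07:41.531867 rewrites the Corefile before "kubectl replace": it inserts a log directive ahead of errors and a hosts block ahead of the forward directive, so host.minikube.internal resolves to the gateway. The effective addition to the Corefile is:

	hosts {
	   192.168.94.1 host.minikube.internal
	   fallthrough
	}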
	I1018 15:07:41.761305  371660 node_ready.go:35] waiting up to 15m0s for node "kindnet-034446" to be "Ready" ...
	I1018 15:07:42.013975  371660 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
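A hypothetical follow-up (not run in this test) to confirm the enabled set:

	out/minikube-linux-amd64 -p kindnet-034446 addons list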
	
	
	==> CRI-O <==
	Oct 18 15:07:14 embed-certs-775590 crio[564]: time="2025-10-18T15:07:14.972053204Z" level=info msg="Started container" PID=1734 containerID=a120a16be10f9e87dda620e5f0086ece9b7dbe4ad07f8baa796dc3721eae54cf description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g/dashboard-metrics-scraper id=a338faf4-d5e5-4ea9-a2b8-63db22eb27e2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=992f13ec69bdeaadbd96ba2b78cefaac27d1d925e6c8f2081787c3e75d7629d1
	Oct 18 15:07:15 embed-certs-775590 crio[564]: time="2025-10-18T15:07:15.035991433Z" level=info msg="Removing container: 1ece75908169b7644f2e638417387757434321c6c7d738a6f5d8344da10a9b97" id=9e9ee831-7fcf-4d58-bf15-199106b3c467 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:07:15 embed-certs-775590 crio[564]: time="2025-10-18T15:07:15.049230878Z" level=info msg="Removed container 1ece75908169b7644f2e638417387757434321c6c7d738a6f5d8344da10a9b97: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g/dashboard-metrics-scraper" id=9e9ee831-7fcf-4d58-bf15-199106b3c467 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.064210816Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=aceed0dc-3257-4ca4-b4be-a2c604da379f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.06523449Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bc66cc32-3781-47b7-9ad2-5a6fe6aad2cb name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.0664211Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=303d4c2a-3f5b-4f7f-97c7-54b8ef190ae6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.066702154Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.071396188Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.07163662Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/254ce3b8fb1b2df3db7cf4b57e42fae6ceb81e68f0474dca68d5be52cec0154e/merged/etc/passwd: no such file or directory"
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.071670395Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/254ce3b8fb1b2df3db7cf4b57e42fae6ceb81e68f0474dca68d5be52cec0154e/merged/etc/group: no such file or directory"
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.071986283Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.094511546Z" level=info msg="Created container 9c35dfe066e80a5d3e0a701c2875c46b723714dbbc466e10be5dd5abc8352ecd: kube-system/storage-provisioner/storage-provisioner" id=303d4c2a-3f5b-4f7f-97c7-54b8ef190ae6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.095218189Z" level=info msg="Starting container: 9c35dfe066e80a5d3e0a701c2875c46b723714dbbc466e10be5dd5abc8352ecd" id=4e88e21c-3409-49ac-b19c-a533d19d8f0e name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:07:25 embed-certs-775590 crio[564]: time="2025-10-18T15:07:25.097094435Z" level=info msg="Started container" PID=1748 containerID=9c35dfe066e80a5d3e0a701c2875c46b723714dbbc466e10be5dd5abc8352ecd description=kube-system/storage-provisioner/storage-provisioner id=4e88e21c-3409-49ac-b19c-a533d19d8f0e name=/runtime.v1.RuntimeService/StartContainer sandboxID=dfa52245e14ccecbf2275ba20021dcccbf42895e1508903eb6d83b95e2589857
	Oct 18 15:07:35 embed-certs-775590 crio[564]: time="2025-10-18T15:07:35.91187284Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ade260d1-0ad6-4043-af4f-87a9175a5ebd name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:35 embed-certs-775590 crio[564]: time="2025-10-18T15:07:35.912697447Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=18ad72f9-1a98-4372-995c-e64d1c1ffa5a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:35 embed-certs-775590 crio[564]: time="2025-10-18T15:07:35.913777851Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g/dashboard-metrics-scraper" id=404b8827-c3c3-45f2-91c2-529b006c61d9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:35 embed-certs-775590 crio[564]: time="2025-10-18T15:07:35.914113768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:35 embed-certs-775590 crio[564]: time="2025-10-18T15:07:35.920685313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:35 embed-certs-775590 crio[564]: time="2025-10-18T15:07:35.921394343Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:35 embed-certs-775590 crio[564]: time="2025-10-18T15:07:35.963053176Z" level=info msg="Created container cfbdaedc4f8219ee6d0c2d1a4682d21b8f3ebc0449f3966109dd5720229923a2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g/dashboard-metrics-scraper" id=404b8827-c3c3-45f2-91c2-529b006c61d9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:35 embed-certs-775590 crio[564]: time="2025-10-18T15:07:35.963750089Z" level=info msg="Starting container: cfbdaedc4f8219ee6d0c2d1a4682d21b8f3ebc0449f3966109dd5720229923a2" id=a2067306-f718-4d38-865e-1e6ebc3639da name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:07:35 embed-certs-775590 crio[564]: time="2025-10-18T15:07:35.965891409Z" level=info msg="Started container" PID=1784 containerID=cfbdaedc4f8219ee6d0c2d1a4682d21b8f3ebc0449f3966109dd5720229923a2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g/dashboard-metrics-scraper id=a2067306-f718-4d38-865e-1e6ebc3639da name=/runtime.v1.RuntimeService/StartContainer sandboxID=992f13ec69bdeaadbd96ba2b78cefaac27d1d925e6c8f2081787c3e75d7629d1
	Oct 18 15:07:36 embed-certs-775590 crio[564]: time="2025-10-18T15:07:36.099848858Z" level=info msg="Removing container: a120a16be10f9e87dda620e5f0086ece9b7dbe4ad07f8baa796dc3721eae54cf" id=6c3fb72e-ab5c-4680-93ba-5770a3c8f013 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:07:36 embed-certs-775590 crio[564]: time="2025-10-18T15:07:36.113845734Z" level=info msg="Removed container a120a16be10f9e87dda620e5f0086ece9b7dbe4ad07f8baa796dc3721eae54cf: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g/dashboard-metrics-scraper" id=6c3fb72e-ab5c-4680-93ba-5770a3c8f013 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	cfbdaedc4f821       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago       Exited              dashboard-metrics-scraper   3                   992f13ec69bde       dashboard-metrics-scraper-6ffb444bf9-txp8g   kubernetes-dashboard
	9c35dfe066e80       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   dfa52245e14cc       storage-provisioner                          kube-system
	7832e0abf4afc       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   aec2b1803dba5       kubernetes-dashboard-855c9754f9-vfwtr        kubernetes-dashboard
	e9ed17ebe9d6e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   720ce375be34c       coredns-66bc5c9577-4b6bm                     kube-system
	0d44aee5b3b14       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   5dedf3eb6ed53       busybox                                      default
	9c9aaeaf481f1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   6b94e410cee70       kindnet-nkkwg                                kube-system
	a503efb2ea938       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   dfa52245e14cc       storage-provisioner                          kube-system
	1f11860acba6b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   e2d3ed84f46f2       kube-proxy-clcpk                             kube-system
	8dbbbc5ba968b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   dcdd3715e731a       etcd-embed-certs-775590                      kube-system
	7dac5e4ff28c6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   e7a7a988f1ea8       kube-controller-manager-embed-certs-775590   kube-system
	391f2be1a0cb0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   e7aed843e24dc       kube-apiserver-embed-certs-775590            kube-system
	65178e05fb205       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   d00e70cc02496       kube-scheduler-embed-certs-775590            kube-system
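The table above is CRI state for node embed-certs-775590; a hypothetical manual equivalent on the node would be:

	sudo crictl ps -a   # running and exited containers known to CRI-O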
	
	
	==> coredns [e9ed17ebe9d6e41129b3293acffeecd329c3a79689e63102b6194a572f14b893] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37422 - 22580 "HINFO IN 5490066793616333859.1255217850527147958. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.084629907s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
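The i/o timeouts above show CoreDNS failing to reach the kubernetes Service VIP (10.96.0.1:443) while pod networking was still converging. A minimal reachability probe from the node, as a hypothetical troubleshooting step:

	curl -k --max-time 2 https://10.96.0.1:443/version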
	
	
	==> describe nodes <==
	Name:               embed-certs-775590
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-775590
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=embed-certs-775590
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_05_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:05:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-775590
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:07:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:07:23 +0000   Sat, 18 Oct 2025 15:05:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:07:23 +0000   Sat, 18 Oct 2025 15:05:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:07:23 +0000   Sat, 18 Oct 2025 15:05:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 15:07:23 +0000   Sat, 18 Oct 2025 15:06:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-775590
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                df1f36b9-fc29-426b-bde8-96e4a3ead557
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-4b6bm                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-embed-certs-775590                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-nkkwg                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-embed-certs-775590             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-embed-certs-775590    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-clcpk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-embed-certs-775590             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-txp8g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vfwtr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node embed-certs-775590 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node embed-certs-775590 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node embed-certs-775590 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s               node-controller  Node embed-certs-775590 event: Registered Node embed-certs-775590 in Controller
	  Normal  NodeReady                94s                kubelet          Node embed-certs-775590 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 55s)  kubelet          Node embed-certs-775590 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 55s)  kubelet          Node embed-certs-775590 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 55s)  kubelet          Node embed-certs-775590 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node embed-certs-775590 event: Registered Node embed-certs-775590 in Controller
	
	
	==> dmesg <==
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
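The martian-source entries mean the kernel saw packets claiming source 127.0.0.1 arriving on eth0, which reverse-path filtering treats as impossible; they are only logged when log_martians is enabled. A hypothetical inspection of the relevant knobs:

	sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians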
	
	
	==> etcd [8dbbbc5ba968b1ba56a06c344a32c3c030795f38bce0c95c907aa5896a4bb7f0] <==
	{"level":"warn","ts":"2025-10-18T15:06:52.177072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.184002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.199633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.210040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.225671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.234310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.242074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.249046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.257148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.265273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.273560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.287182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.294972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.303825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.312874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.320994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.329153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.339758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.346573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.364106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.374086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.381297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.398265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.405535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:06:52.465752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60702","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:07:44 up  2:50,  0 user,  load average: 6.53, 3.82, 2.35
	Linux embed-certs-775590 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9c9aaeaf481f15d9001d08c681045b2b41d6acb97974d97e2be7e59590898211] <==
	I1018 15:06:54.393746       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 15:06:54.394073       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 15:06:54.394264       1 main.go:148] setting mtu 1500 for CNI 
	I1018 15:06:54.394291       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 15:06:54.394405       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T15:06:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 15:06:54.693653       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 15:06:54.693686       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 15:06:54.693701       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 15:06:54.693876       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 15:06:55.189279       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 15:06:55.189312       1 metrics.go:72] Registering metrics
	I1018 15:06:55.189368       1 controller.go:711] "Syncing nftables rules"
	I1018 15:07:04.694853       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 15:07:04.694945       1 main.go:301] handling current node
	I1018 15:07:14.698146       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 15:07:14.698184       1 main.go:301] handling current node
	I1018 15:07:24.693595       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 15:07:24.693637       1 main.go:301] handling current node
	I1018 15:07:34.701003       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 15:07:34.701039       1 main.go:301] handling current node
	
	
	==> kube-apiserver [391f2be1a0cb010a611fea801cf28a9d37af079421a87d50d1a13033b93f5316] <==
	I1018 15:06:52.998791       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 15:06:52.999029       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 15:06:53.000202       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 15:06:53.000698       1 aggregator.go:171] initial CRD sync complete...
	I1018 15:06:53.000963       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 15:06:53.000980       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 15:06:53.000989       1 cache.go:39] Caches are synced for autoregister controller
	I1018 15:06:53.002554       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 15:06:53.002811       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 15:06:53.011805       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 15:06:53.036935       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 15:06:53.046383       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 15:06:53.046440       1 policy_source.go:240] refreshing policies
	I1018 15:06:53.057500       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 15:06:53.314743       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 15:06:53.346365       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 15:06:53.370780       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:06:53.378967       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:06:53.385833       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 15:06:53.416611       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.102.78"}
	I1018 15:06:53.426803       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.179.170"}
	I1018 15:06:53.901246       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:06:56.715508       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 15:06:56.862836       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 15:06:56.963469       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7dac5e4ff28c655ac1e75121254546efea7aeb21f3f1842322ce82ba42dafce6] <==
	I1018 15:06:56.348867       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 15:06:56.350073       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 15:06:56.352375       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 15:06:56.354653       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 15:06:56.355789       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 15:06:56.358053       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 15:06:56.359228       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 15:06:56.359255       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 15:06:56.359304       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 15:06:56.359335       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 15:06:56.359430       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 15:06:56.359690       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 15:06:56.359759       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 15:06:56.359771       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 15:06:56.359816       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 15:06:56.360206       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 15:06:56.360219       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 15:06:56.362500       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 15:06:56.362549       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 15:06:56.364997       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 15:06:56.365016       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 15:06:56.366186       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 15:06:56.367368       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 15:06:56.375833       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 15:06:56.377177       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [1f11860acba6b353b37043c9600e22e539776e34b5ceb6d65aa1f9742fa2a461] <==
	I1018 15:06:54.319237       1 server_linux.go:53] "Using iptables proxy"
	I1018 15:06:54.388061       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 15:06:54.488449       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 15:06:54.488482       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 15:06:54.488584       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 15:06:54.508538       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 15:06:54.508597       1 server_linux.go:132] "Using iptables Proxier"
	I1018 15:06:54.515255       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 15:06:54.515724       1 server.go:527] "Version info" version="v1.34.1"
	I1018 15:06:54.515775       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:06:54.517343       1 config.go:106] "Starting endpoint slice config controller"
	I1018 15:06:54.517359       1 config.go:200] "Starting service config controller"
	I1018 15:06:54.517370       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 15:06:54.517379       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 15:06:54.517635       1 config.go:309] "Starting node config controller"
	I1018 15:06:54.517701       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 15:06:54.517715       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 15:06:54.517901       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 15:06:54.517977       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 15:06:54.617826       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 15:06:54.617847       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 15:06:54.618205       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [65178e05fb2051f87794f11a491ebb47135644c26089b48edd847c231777d3ce] <==
	I1018 15:06:50.879337       1 serving.go:386] Generated self-signed cert in-memory
	W1018 15:06:52.946297       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 15:06:52.946349       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 15:06:52.946364       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 15:06:52.946374       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 15:06:52.983834       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 15:06:52.984264       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:06:52.986896       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:06:52.986949       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:06:52.987304       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 15:06:52.987381       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 15:06:53.087677       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 15:06:57 embed-certs-775590 kubelet[721]: I1018 15:06:57.083511     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g5nr\" (UniqueName: \"kubernetes.io/projected/848bd25d-a835-42b9-b839-ed84777eb911-kube-api-access-5g5nr\") pod \"dashboard-metrics-scraper-6ffb444bf9-txp8g\" (UID: \"848bd25d-a835-42b9-b839-ed84777eb911\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g"
	Oct 18 15:07:00 embed-certs-775590 kubelet[721]: I1018 15:07:00.980199     721 scope.go:117] "RemoveContainer" containerID="097dbcf22388bf577426ae2e1cf215d02a018d3514599ede91ddcaec91f5c0cd"
	Oct 18 15:07:01 embed-certs-775590 kubelet[721]: I1018 15:07:01.985182     721 scope.go:117] "RemoveContainer" containerID="097dbcf22388bf577426ae2e1cf215d02a018d3514599ede91ddcaec91f5c0cd"
	Oct 18 15:07:01 embed-certs-775590 kubelet[721]: I1018 15:07:01.985338     721 scope.go:117] "RemoveContainer" containerID="1ece75908169b7644f2e638417387757434321c6c7d738a6f5d8344da10a9b97"
	Oct 18 15:07:01 embed-certs-775590 kubelet[721]: E1018 15:07:01.985535     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txp8g_kubernetes-dashboard(848bd25d-a835-42b9-b839-ed84777eb911)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g" podUID="848bd25d-a835-42b9-b839-ed84777eb911"
	Oct 18 15:07:02 embed-certs-775590 kubelet[721]: I1018 15:07:02.994812     721 scope.go:117] "RemoveContainer" containerID="1ece75908169b7644f2e638417387757434321c6c7d738a6f5d8344da10a9b97"
	Oct 18 15:07:02 embed-certs-775590 kubelet[721]: E1018 15:07:02.995074     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txp8g_kubernetes-dashboard(848bd25d-a835-42b9-b839-ed84777eb911)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g" podUID="848bd25d-a835-42b9-b839-ed84777eb911"
	Oct 18 15:07:04 embed-certs-775590 kubelet[721]: I1018 15:07:04.478855     721 scope.go:117] "RemoveContainer" containerID="1ece75908169b7644f2e638417387757434321c6c7d738a6f5d8344da10a9b97"
	Oct 18 15:07:04 embed-certs-775590 kubelet[721]: E1018 15:07:04.479763     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txp8g_kubernetes-dashboard(848bd25d-a835-42b9-b839-ed84777eb911)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g" podUID="848bd25d-a835-42b9-b839-ed84777eb911"
	Oct 18 15:07:06 embed-certs-775590 kubelet[721]: I1018 15:07:06.019039     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vfwtr" podStartSLOduration=2.006503487 podStartE2EDuration="10.019015563s" podCreationTimestamp="2025-10-18 15:06:56 +0000 UTC" firstStartedPulling="2025-10-18 15:06:57.278530281 +0000 UTC m=+7.471834583" lastFinishedPulling="2025-10-18 15:07:05.291042351 +0000 UTC m=+15.484346659" observedRunningTime="2025-10-18 15:07:06.018563912 +0000 UTC m=+16.211868224" watchObservedRunningTime="2025-10-18 15:07:06.019015563 +0000 UTC m=+16.212319887"
	Oct 18 15:07:14 embed-certs-775590 kubelet[721]: I1018 15:07:14.909123     721 scope.go:117] "RemoveContainer" containerID="1ece75908169b7644f2e638417387757434321c6c7d738a6f5d8344da10a9b97"
	Oct 18 15:07:15 embed-certs-775590 kubelet[721]: I1018 15:07:15.033992     721 scope.go:117] "RemoveContainer" containerID="1ece75908169b7644f2e638417387757434321c6c7d738a6f5d8344da10a9b97"
	Oct 18 15:07:15 embed-certs-775590 kubelet[721]: I1018 15:07:15.034433     721 scope.go:117] "RemoveContainer" containerID="a120a16be10f9e87dda620e5f0086ece9b7dbe4ad07f8baa796dc3721eae54cf"
	Oct 18 15:07:15 embed-certs-775590 kubelet[721]: E1018 15:07:15.034639     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txp8g_kubernetes-dashboard(848bd25d-a835-42b9-b839-ed84777eb911)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g" podUID="848bd25d-a835-42b9-b839-ed84777eb911"
	Oct 18 15:07:24 embed-certs-775590 kubelet[721]: I1018 15:07:24.479283     721 scope.go:117] "RemoveContainer" containerID="a120a16be10f9e87dda620e5f0086ece9b7dbe4ad07f8baa796dc3721eae54cf"
	Oct 18 15:07:24 embed-certs-775590 kubelet[721]: E1018 15:07:24.479497     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txp8g_kubernetes-dashboard(848bd25d-a835-42b9-b839-ed84777eb911)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g" podUID="848bd25d-a835-42b9-b839-ed84777eb911"
	Oct 18 15:07:25 embed-certs-775590 kubelet[721]: I1018 15:07:25.063693     721 scope.go:117] "RemoveContainer" containerID="a503efb2ea9381b9c5fa4f5b26e57f3c807643c204fab83d6d48c48330820b57"
	Oct 18 15:07:35 embed-certs-775590 kubelet[721]: I1018 15:07:35.911493     721 scope.go:117] "RemoveContainer" containerID="a120a16be10f9e87dda620e5f0086ece9b7dbe4ad07f8baa796dc3721eae54cf"
	Oct 18 15:07:36 embed-certs-775590 kubelet[721]: I1018 15:07:36.098331     721 scope.go:117] "RemoveContainer" containerID="a120a16be10f9e87dda620e5f0086ece9b7dbe4ad07f8baa796dc3721eae54cf"
	Oct 18 15:07:36 embed-certs-775590 kubelet[721]: I1018 15:07:36.098589     721 scope.go:117] "RemoveContainer" containerID="cfbdaedc4f8219ee6d0c2d1a4682d21b8f3ebc0449f3966109dd5720229923a2"
	Oct 18 15:07:36 embed-certs-775590 kubelet[721]: E1018 15:07:36.098802     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txp8g_kubernetes-dashboard(848bd25d-a835-42b9-b839-ed84777eb911)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txp8g" podUID="848bd25d-a835-42b9-b839-ed84777eb911"
	Oct 18 15:07:39 embed-certs-775590 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 15:07:39 embed-certs-775590 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 15:07:39 embed-certs-775590 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 15:07:39 embed-certs-775590 systemd[1]: kubelet.service: Consumed 1.725s CPU time.
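The back-off values in the errors above (10s, then 20s, then 40s) are CrashLoopBackOff's per-restart doubling. The usual next step, not captured in this report, is to read the crashed container's previous log:

	kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-txp8g --previous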
	
	
	==> kubernetes-dashboard [7832e0abf4afc353da085c8c8070f3929d57ca1ce8ed56737bd8d3f1433ad26f] <==
	2025/10/18 15:07:05 Using namespace: kubernetes-dashboard
	2025/10/18 15:07:05 Using in-cluster config to connect to apiserver
	2025/10/18 15:07:05 Using secret token for csrf signing
	2025/10/18 15:07:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 15:07:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 15:07:05 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 15:07:05 Generating JWE encryption key
	2025/10/18 15:07:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 15:07:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 15:07:05 Initializing JWE encryption key from synchronized object
	2025/10/18 15:07:05 Creating in-cluster Sidecar client
	2025/10/18 15:07:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 15:07:05 Serving insecurely on HTTP port: 9090
	2025/10/18 15:07:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 15:07:05 Starting overwatch
	
	
	==> storage-provisioner [9c35dfe066e80a5d3e0a701c2875c46b723714dbbc466e10be5dd5abc8352ecd] <==
	I1018 15:07:25.110025       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 15:07:25.118812       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 15:07:25.118959       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 15:07:25.121116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:28.576778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:32.838370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:36.437759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:39.491382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:42.514603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:42.520572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 15:07:42.520796       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 15:07:42.521011       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-775590_7c83e531-f167-4419-be16-ec32ca059751!
	I1018 15:07:42.520970       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b555887a-6bab-4008-b93c-f9bed67d8ecd", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-775590_7c83e531-f167-4419-be16-ec32ca059751 became leader
	W1018 15:07:42.523526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:42.530331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 15:07:42.621682       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-775590_7c83e531-f167-4419-be16-ec32ca059751!
	W1018 15:07:44.533878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:44.538617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a503efb2ea9381b9c5fa4f5b26e57f3c807643c204fab83d6d48c48330820b57] <==
	I1018 15:06:54.284496       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 15:07:24.286311       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
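Two independent failure signals show up in the embed-certs log above: dashboard-metrics-scraper is in CrashLoopBackOff (back-off growing from 20s to 40s), and the first storage-provisioner container exits fatally after a 30s apiserver timeout before its replacement acquires the hostpath lease. A minimal triage sketch, had the profile not already been deleted (the audit table further down shows `delete -p embed-certs-775590`); these are illustrative follow-up commands, not part of the harness:

	kubectl --context embed-certs-775590 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-txp8g
	kubectl --context embed-certs-775590 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-txp8g --previous

`--previous` pulls the log of the crashed attempt rather than the current back-off placeholder.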
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-775590 -n embed-certs-775590
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-775590 -n embed-certs-775590: exit status 2 (344.279794ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-775590 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-489104 --alsologtostderr -v=1
E1018 15:07:58.354687   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-489104 --alsologtostderr -v=1: exit status 80 (2.522744745s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-489104 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 15:07:57.291506  384119 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:07:57.291769  384119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:07:57.291779  384119 out.go:374] Setting ErrFile to fd 2...
	I1018 15:07:57.291783  384119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:07:57.292055  384119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:07:57.292306  384119 out.go:368] Setting JSON to false
	I1018 15:07:57.292359  384119 mustload.go:65] Loading cluster: default-k8s-diff-port-489104
	I1018 15:07:57.292717  384119 config.go:182] Loaded profile config "default-k8s-diff-port-489104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:57.293209  384119 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-489104 --format={{.State.Status}}
	I1018 15:07:57.314069  384119 host.go:66] Checking if "default-k8s-diff-port-489104" exists ...
	I1018 15:07:57.314475  384119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:07:57.382704  384119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:93 SystemTime:2025-10-18 15:07:57.370201093 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:07:57.383675  384119 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-489104 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 15:07:57.387687  384119 out.go:179] * Pausing node default-k8s-diff-port-489104 ... 
	I1018 15:07:57.389028  384119 host.go:66] Checking if "default-k8s-diff-port-489104" exists ...
	I1018 15:07:57.389391  384119 ssh_runner.go:195] Run: systemctl --version
	I1018 15:07:57.389438  384119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-489104
	I1018 15:07:57.412989  384119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/default-k8s-diff-port-489104/id_rsa Username:docker}
	I1018 15:07:57.511619  384119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:07:57.525518  384119 pause.go:52] kubelet running: true
	I1018 15:07:57.525615  384119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:07:57.679190  384119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:07:57.679273  384119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:07:57.754729  384119 cri.go:89] found id: "40928802aedeab3c2a9b90afd2b1acb3c9667f75da95b3ef38a0964e568b3ffd"
	I1018 15:07:57.754758  384119 cri.go:89] found id: "e88a1356120d204fdc4589ff5088ddfaf6f22f9da2c191956e946643ae7c3ae2"
	I1018 15:07:57.754765  384119 cri.go:89] found id: "4159ca4d468f49826a85e981582c651a18b8a39a2bddebeb07af010243f0d04f"
	I1018 15:07:57.754770  384119 cri.go:89] found id: "b87ad405fde623f73cb099b26bcf0ab11f50050837725bf409c775dfa67cde02"
	I1018 15:07:57.754774  384119 cri.go:89] found id: "09fa1b647fa4fd7599c9fa5e528e44f54acc68f2fbf3314632c5c794ff039576"
	I1018 15:07:57.754788  384119 cri.go:89] found id: "7fb5589151f3a78025f09ce1b546891fb02d25162971c57434743de3e24cbe9f"
	I1018 15:07:57.754792  384119 cri.go:89] found id: "ce9720cd32591b6942daf642e28e8696920a5b3fcb4f8eddcd689c9ef3054c1e"
	I1018 15:07:57.754796  384119 cri.go:89] found id: "1e308c368e373ccff9c4504f9e6503c09e4a7d1e0200e60472eaf38378135b96"
	I1018 15:07:57.754800  384119 cri.go:89] found id: "2358e366cd9757f3067562185021f0051cae924e07f221015b53a392bf5f90b2"
	I1018 15:07:57.754818  384119 cri.go:89] found id: "27c889175cb65cf3399ec27dae37172afc92f1f6b367c006c3c16f8a1c539efb"
	I1018 15:07:57.754828  384119 cri.go:89] found id: "e38e8320d8865a084f077cc5774404b5b2815ffd7589ad4d5844cfa0edc768c5"
	I1018 15:07:57.754832  384119 cri.go:89] found id: ""
	I1018 15:07:57.754886  384119 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:07:57.767431  384119 retry.go:31] will retry after 151.824907ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:07:57Z" level=error msg="open /run/runc: no such file or directory"
	I1018 15:07:57.919944  384119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:07:57.933965  384119 pause.go:52] kubelet running: false
	I1018 15:07:57.934044  384119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:07:58.086676  384119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:07:58.086805  384119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:07:58.163361  384119 cri.go:89] found id: "40928802aedeab3c2a9b90afd2b1acb3c9667f75da95b3ef38a0964e568b3ffd"
	I1018 15:07:58.163407  384119 cri.go:89] found id: "e88a1356120d204fdc4589ff5088ddfaf6f22f9da2c191956e946643ae7c3ae2"
	I1018 15:07:58.163416  384119 cri.go:89] found id: "4159ca4d468f49826a85e981582c651a18b8a39a2bddebeb07af010243f0d04f"
	I1018 15:07:58.163422  384119 cri.go:89] found id: "b87ad405fde623f73cb099b26bcf0ab11f50050837725bf409c775dfa67cde02"
	I1018 15:07:58.163426  384119 cri.go:89] found id: "09fa1b647fa4fd7599c9fa5e528e44f54acc68f2fbf3314632c5c794ff039576"
	I1018 15:07:58.163436  384119 cri.go:89] found id: "7fb5589151f3a78025f09ce1b546891fb02d25162971c57434743de3e24cbe9f"
	I1018 15:07:58.163441  384119 cri.go:89] found id: "ce9720cd32591b6942daf642e28e8696920a5b3fcb4f8eddcd689c9ef3054c1e"
	I1018 15:07:58.163445  384119 cri.go:89] found id: "1e308c368e373ccff9c4504f9e6503c09e4a7d1e0200e60472eaf38378135b96"
	I1018 15:07:58.163449  384119 cri.go:89] found id: "2358e366cd9757f3067562185021f0051cae924e07f221015b53a392bf5f90b2"
	I1018 15:07:58.163474  384119 cri.go:89] found id: "27c889175cb65cf3399ec27dae37172afc92f1f6b367c006c3c16f8a1c539efb"
	I1018 15:07:58.163480  384119 cri.go:89] found id: "e38e8320d8865a084f077cc5774404b5b2815ffd7589ad4d5844cfa0edc768c5"
	I1018 15:07:58.163482  384119 cri.go:89] found id: ""
	I1018 15:07:58.163533  384119 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:07:58.176208  384119 retry.go:31] will retry after 463.255875ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:07:58Z" level=error msg="open /run/runc: no such file or directory"
	I1018 15:07:58.639896  384119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:07:58.653776  384119 pause.go:52] kubelet running: false
	I1018 15:07:58.653852  384119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:07:58.817508  384119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:07:58.817588  384119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:07:58.886741  384119 cri.go:89] found id: "40928802aedeab3c2a9b90afd2b1acb3c9667f75da95b3ef38a0964e568b3ffd"
	I1018 15:07:58.886767  384119 cri.go:89] found id: "e88a1356120d204fdc4589ff5088ddfaf6f22f9da2c191956e946643ae7c3ae2"
	I1018 15:07:58.886773  384119 cri.go:89] found id: "4159ca4d468f49826a85e981582c651a18b8a39a2bddebeb07af010243f0d04f"
	I1018 15:07:58.886777  384119 cri.go:89] found id: "b87ad405fde623f73cb099b26bcf0ab11f50050837725bf409c775dfa67cde02"
	I1018 15:07:58.886790  384119 cri.go:89] found id: "09fa1b647fa4fd7599c9fa5e528e44f54acc68f2fbf3314632c5c794ff039576"
	I1018 15:07:58.886796  384119 cri.go:89] found id: "7fb5589151f3a78025f09ce1b546891fb02d25162971c57434743de3e24cbe9f"
	I1018 15:07:58.886799  384119 cri.go:89] found id: "ce9720cd32591b6942daf642e28e8696920a5b3fcb4f8eddcd689c9ef3054c1e"
	I1018 15:07:58.886804  384119 cri.go:89] found id: "1e308c368e373ccff9c4504f9e6503c09e4a7d1e0200e60472eaf38378135b96"
	I1018 15:07:58.886809  384119 cri.go:89] found id: "2358e366cd9757f3067562185021f0051cae924e07f221015b53a392bf5f90b2"
	I1018 15:07:58.886825  384119 cri.go:89] found id: "27c889175cb65cf3399ec27dae37172afc92f1f6b367c006c3c16f8a1c539efb"
	I1018 15:07:58.886831  384119 cri.go:89] found id: "e38e8320d8865a084f077cc5774404b5b2815ffd7589ad4d5844cfa0edc768c5"
	I1018 15:07:58.886836  384119 cri.go:89] found id: ""
	I1018 15:07:58.886888  384119 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:07:58.899466  384119 retry.go:31] will retry after 577.749638ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:07:58Z" level=error msg="open /run/runc: no such file or directory"
	I1018 15:07:59.478093  384119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:07:59.492051  384119 pause.go:52] kubelet running: false
	I1018 15:07:59.492123  384119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 15:07:59.647782  384119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 15:07:59.647865  384119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 15:07:59.720555  384119 cri.go:89] found id: "40928802aedeab3c2a9b90afd2b1acb3c9667f75da95b3ef38a0964e568b3ffd"
	I1018 15:07:59.720581  384119 cri.go:89] found id: "e88a1356120d204fdc4589ff5088ddfaf6f22f9da2c191956e946643ae7c3ae2"
	I1018 15:07:59.720587  384119 cri.go:89] found id: "4159ca4d468f49826a85e981582c651a18b8a39a2bddebeb07af010243f0d04f"
	I1018 15:07:59.720592  384119 cri.go:89] found id: "b87ad405fde623f73cb099b26bcf0ab11f50050837725bf409c775dfa67cde02"
	I1018 15:07:59.720596  384119 cri.go:89] found id: "09fa1b647fa4fd7599c9fa5e528e44f54acc68f2fbf3314632c5c794ff039576"
	I1018 15:07:59.720601  384119 cri.go:89] found id: "7fb5589151f3a78025f09ce1b546891fb02d25162971c57434743de3e24cbe9f"
	I1018 15:07:59.720604  384119 cri.go:89] found id: "ce9720cd32591b6942daf642e28e8696920a5b3fcb4f8eddcd689c9ef3054c1e"
	I1018 15:07:59.720607  384119 cri.go:89] found id: "1e308c368e373ccff9c4504f9e6503c09e4a7d1e0200e60472eaf38378135b96"
	I1018 15:07:59.720609  384119 cri.go:89] found id: "2358e366cd9757f3067562185021f0051cae924e07f221015b53a392bf5f90b2"
	I1018 15:07:59.720615  384119 cri.go:89] found id: "27c889175cb65cf3399ec27dae37172afc92f1f6b367c006c3c16f8a1c539efb"
	I1018 15:07:59.720618  384119 cri.go:89] found id: "e38e8320d8865a084f077cc5774404b5b2815ffd7589ad4d5844cfa0edc768c5"
	I1018 15:07:59.720620  384119 cri.go:89] found id: ""
	I1018 15:07:59.720661  384119 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 15:07:59.742018  384119 out.go:203] 
	W1018 15:07:59.744109  384119 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:07:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T15:07:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 15:07:59.744129  384119 out.go:285] * 
	* 
	W1018 15:07:59.750214  384119 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 15:07:59.752119  384119 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-489104 --alsologtostderr -v=1 failed: exit status 80
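The failure itself is mechanical: the pause path stops the kubelet, then tries to enumerate running containers with `sudo runc list -f json`, and every retry fails identically because runc's default state directory `/run/runc` does not exist on this crio node, so the command gives up with GUEST_PAUSE. A minimal sketch for confirming which runtime state directory the node actually uses, assuming the profile is still up; the alternative paths are guesses, not something this log confirms:

	minikube ssh -p default-k8s-diff-port-489104 -- sudo crictl ps --state running   # the CRI still reports the containers
	minikube ssh -p default-k8s-diff-port-489104 -- ls /run/runc                     # reproduces "no such file or directory"
	minikube ssh -p default-k8s-diff-port-489104 -- ls /run/crun /run/crio           # guessed alternatives for the state dir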
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-489104
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-489104:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58",
	        "Created": "2025-10-18T15:05:55.975362915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 366978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T15:07:00.170286433Z",
	            "FinishedAt": "2025-10-18T15:06:58.958035594Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58/hostname",
	        "HostsPath": "/var/lib/docker/containers/028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58/hosts",
	        "LogPath": "/var/lib/docker/containers/028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58/028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58-json.log",
	        "Name": "/default-k8s-diff-port-489104",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-489104:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-489104",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58",
	                "LowerDir": "/var/lib/docker/overlay2/d41b3b6686e5c33cdc02253cbb7729e5677ff70aca495d84cb9a64aec8ce6497-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d41b3b6686e5c33cdc02253cbb7729e5677ff70aca495d84cb9a64aec8ce6497/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d41b3b6686e5c33cdc02253cbb7729e5677ff70aca495d84cb9a64aec8ce6497/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d41b3b6686e5c33cdc02253cbb7729e5677ff70aca495d84cb9a64aec8ce6497/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-489104",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-489104/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-489104",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-489104",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-489104",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ffddfb2dd428b00958beca129d3775f125283ab017f76f6cc00e62e7306cdca",
	            "SandboxKey": "/var/run/docker/netns/3ffddfb2dd42",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-489104": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:6d:1d:26:9b:20",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc1ae438e1a0053de9cf1d93573ce1c4498bc18884eb76fa43ba91a693a5bdd8",
	                    "EndpointID": "df705a79428b690e011a56e97bb27da3502ff460cdf30d624f63c6286b6b887e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-489104",
	                        "028760fe9fe5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
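The inspect output also shows where the SSH port used during the pause attempt comes from: the stderr log above resolves it with a Go template over NetworkSettings.Ports, which here maps 22/tcp to host port 33103. The same lookup can be run by hand:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-489104
	# prints 33103, matching the NetworkSettings block above

Worth noting, though it is only a guess at a connection: HostConfig.Tmpfs mounts /run as tmpfs, so runtime state under /run does not survive a restart of the kic container, and State shows a restart at 15:07:00, shortly before the pause attempt found /run/runc missing.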
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-489104 -n default-k8s-diff-port-489104
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-489104 -n default-k8s-diff-port-489104: exit status 2 (357.020082ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-489104 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-489104 logs -n 25: (1.255139968s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-034446 sudo cat /var/lib/kubelet/config.yaml                                                                                                               │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo systemctl status docker --all --full --no-pager                                                                                                │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ ssh     │ -p auto-034446 sudo systemctl cat docker --no-pager                                                                                                                │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ delete  │ -p embed-certs-775590                                                                                                                                              │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo cat /etc/docker/daemon.json                                                                                                                    │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ start   │ -p calico-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-034446                │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ ssh     │ -p auto-034446 sudo docker system info                                                                                                                             │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ ssh     │ -p auto-034446 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ ssh     │ -p auto-034446 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ ssh     │ -p auto-034446 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo cri-dockerd --version                                                                                                                          │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ ssh     │ -p auto-034446 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo containerd config dump                                                                                                                         │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo crio config                                                                                                                                    │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ delete  │ -p auto-034446                                                                                                                                                     │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ start   │ -p custom-flannel-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-034446        │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ image   │ default-k8s-diff-port-489104 image list --format=json                                                                                                              │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ pause   │ -p default-k8s-diff-port-489104 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:07:55
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:07:55.981529  383339 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:07:55.981824  383339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:07:55.981834  383339 out.go:374] Setting ErrFile to fd 2...
	I1018 15:07:55.981838  383339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:07:55.982066  383339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:07:55.982546  383339 out.go:368] Setting JSON to false
	I1018 15:07:55.983754  383339 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10227,"bootTime":1760789849,"procs":354,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:07:55.983847  383339 start.go:141] virtualization: kvm guest
	I1018 15:07:55.986155  383339 out.go:179] * [custom-flannel-034446] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:07:55.987602  383339 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:07:55.987604  383339 notify.go:220] Checking for updates...
	I1018 15:07:55.989022  383339 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:07:55.990469  383339 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:07:55.991682  383339 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 15:07:55.992954  383339 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:07:55.994135  383339 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:07:55.995874  383339 config.go:182] Loaded profile config "calico-034446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:55.996024  383339 config.go:182] Loaded profile config "default-k8s-diff-port-489104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:55.996169  383339 config.go:182] Loaded profile config "kindnet-034446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:55.996340  383339 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:07:56.024543  383339 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:07:56.024707  383339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:07:56.085267  383339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 15:07:56.074966611 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:07:56.085383  383339 docker.go:318] overlay module found
	I1018 15:07:56.087357  383339 out.go:179] * Using the docker driver based on user configuration
	I1018 15:07:56.088905  383339 start.go:305] selected driver: docker
	I1018 15:07:56.088938  383339 start.go:925] validating driver "docker" against <nil>
	I1018 15:07:56.088954  383339 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:07:56.089525  383339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:07:56.149791  383339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 15:07:56.140153653 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:07:56.150005  383339 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 15:07:56.150327  383339 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:07:56.152190  383339 out.go:179] * Using Docker driver with root privileges
	I1018 15:07:56.153412  383339 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1018 15:07:56.153443  383339 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1018 15:07:56.153523  383339 start.go:349] cluster config:
	{Name:custom-flannel-034446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-034446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:07:56.154815  383339 out.go:179] * Starting "custom-flannel-034446" primary control-plane node in "custom-flannel-034446" cluster
	I1018 15:07:56.156026  383339 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:07:56.157354  383339 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:07:56.158566  383339 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:07:56.158598  383339 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 15:07:56.158616  383339 cache.go:58] Caching tarball of preloaded images
	I1018 15:07:56.158664  383339 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 15:07:56.158707  383339 preload.go:233] Found /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 15:07:56.158722  383339 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 15:07:56.158824  383339 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/custom-flannel-034446/config.json ...
	I1018 15:07:56.158855  383339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/custom-flannel-034446/config.json: {Name:mke19ee04e1a98dade9dc2783a6332b77aeb5378 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:56.180642  383339 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:07:56.180661  383339 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 15:07:56.180677  383339 cache.go:232] Successfully downloaded all kic artifacts
	I1018 15:07:56.180704  383339 start.go:360] acquireMachinesLock for custom-flannel-034446: {Name:mkcc5b61b07fb57d79f02863a1d88929fc18f126 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:07:56.180816  383339 start.go:364] duration metric: took 89.323µs to acquireMachinesLock for "custom-flannel-034446"
	I1018 15:07:56.180848  383339 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-034446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-034446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:07:56.180951  383339 start.go:125] createHost starting for "" (driver="docker")
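	[editor's note] The cluster config dumped above is what gets persisted to profiles/custom-flannel-034446/config.json before provisioning begins. As a rough illustration only (the trimmed-down struct below is hypothetical, not minikube's actual config type), a minimal Go sketch of reading a few of those fields back out of that JSON:

	    package main

	    import (
	    	"encoding/json"
	    	"fmt"
	    	"os"
	    )

	    // Hypothetical, trimmed-down view of the profile config; the real
	    // minikube type carries many more fields (see the dump above).
	    type clusterConfig struct {
	    	Name             string
	    	Driver           string
	    	Memory           int
	    	CPUs             int
	    	KubernetesConfig struct {
	    		KubernetesVersion string
	    		ContainerRuntime  string
	    		NetworkPlugin     string
	    		CNI               string
	    	}
	    }

	    func main() {
	    	// e.g. .minikube/profiles/custom-flannel-034446/config.json
	    	data, err := os.ReadFile("config.json")
	    	if err != nil {
	    		panic(err)
	    	}
	    	var cc clusterConfig
	    	if err := json.Unmarshal(data, &cc); err != nil {
	    		panic(err)
	    	}
	    	fmt.Printf("%s: k8s %s on %s (cni=%s)\n", cc.Name,
	    		cc.KubernetesConfig.KubernetesVersion,
	    		cc.KubernetesConfig.ContainerRuntime,
	    		cc.KubernetesConfig.CNI)
	    }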
	I1018 15:07:54.176114  371660 system_pods.go:86] 8 kube-system pods found
	I1018 15:07:54.176157  371660 system_pods.go:89] "coredns-66bc5c9577-xv4pt" [bd308514-04ee-4c8f-ae45-2156174a0b17] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:07:54.176165  371660 system_pods.go:89] "etcd-kindnet-034446" [00e31727-8712-4c46-8cd7-6c57ec04dc35] Running
	I1018 15:07:54.176173  371660 system_pods.go:89] "kindnet-8bv5s" [a86107b1-1157-4557-8c91-36297f683f57] Running
	I1018 15:07:54.176180  371660 system_pods.go:89] "kube-apiserver-kindnet-034446" [50b8de39-20a1-4e8f-bd2b-cc361d831564] Running
	I1018 15:07:54.176186  371660 system_pods.go:89] "kube-controller-manager-kindnet-034446" [8eb018da-f160-46ea-8782-422c881396cf] Running
	I1018 15:07:54.176192  371660 system_pods.go:89] "kube-proxy-cchmh" [d0964c4e-457c-4b27-837d-09afebe67c53] Running
	I1018 15:07:54.176201  371660 system_pods.go:89] "kube-scheduler-kindnet-034446" [4f95b538-7c91-46bb-b092-cfdf4971a4ec] Running
	I1018 15:07:54.176212  371660 system_pods.go:89] "storage-provisioner" [30dcc47b-3797-4dba-91b3-c61e161c132d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:07:54.176232  371660 retry.go:31] will retry after 594.007222ms: missing components: kube-dns
	I1018 15:07:54.775397  371660 system_pods.go:86] 8 kube-system pods found
	I1018 15:07:54.775438  371660 system_pods.go:89] "coredns-66bc5c9577-xv4pt" [bd308514-04ee-4c8f-ae45-2156174a0b17] Running
	I1018 15:07:54.775447  371660 system_pods.go:89] "etcd-kindnet-034446" [00e31727-8712-4c46-8cd7-6c57ec04dc35] Running
	I1018 15:07:54.775453  371660 system_pods.go:89] "kindnet-8bv5s" [a86107b1-1157-4557-8c91-36297f683f57] Running
	I1018 15:07:54.775459  371660 system_pods.go:89] "kube-apiserver-kindnet-034446" [50b8de39-20a1-4e8f-bd2b-cc361d831564] Running
	I1018 15:07:54.775466  371660 system_pods.go:89] "kube-controller-manager-kindnet-034446" [8eb018da-f160-46ea-8782-422c881396cf] Running
	I1018 15:07:54.775472  371660 system_pods.go:89] "kube-proxy-cchmh" [d0964c4e-457c-4b27-837d-09afebe67c53] Running
	I1018 15:07:54.775477  371660 system_pods.go:89] "kube-scheduler-kindnet-034446" [4f95b538-7c91-46bb-b092-cfdf4971a4ec] Running
	I1018 15:07:54.775483  371660 system_pods.go:89] "storage-provisioner" [30dcc47b-3797-4dba-91b3-c61e161c132d] Running
	I1018 15:07:54.775493  371660 system_pods.go:126] duration metric: took 1.483599668s to wait for k8s-apps to be running ...
	I1018 15:07:54.775507  371660 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 15:07:54.775560  371660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:07:54.790748  371660 system_svc.go:56] duration metric: took 15.231892ms WaitForService to wait for kubelet
	I1018 15:07:54.790775  371660 kubeadm.go:586] duration metric: took 13.366255673s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:07:54.790797  371660 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:07:54.794245  371660 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 15:07:54.794276  371660 node_conditions.go:123] node cpu capacity is 8
	I1018 15:07:54.794294  371660 node_conditions.go:105] duration metric: took 3.48618ms to run NodePressure ...
	I1018 15:07:54.794312  371660 start.go:241] waiting for startup goroutines ...
	I1018 15:07:54.794331  371660 start.go:246] waiting for cluster config update ...
	I1018 15:07:54.794349  371660 start.go:255] writing updated cluster config ...
	I1018 15:07:54.794674  371660 ssh_runner.go:195] Run: rm -f paused
	I1018 15:07:54.798881  371660 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:07:54.802596  371660 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xv4pt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:54.807121  371660 pod_ready.go:94] pod "coredns-66bc5c9577-xv4pt" is "Ready"
	I1018 15:07:54.807144  371660 pod_ready.go:86] duration metric: took 4.521724ms for pod "coredns-66bc5c9577-xv4pt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:54.809280  371660 pod_ready.go:83] waiting for pod "etcd-kindnet-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:54.813312  371660 pod_ready.go:94] pod "etcd-kindnet-034446" is "Ready"
	I1018 15:07:54.813330  371660 pod_ready.go:86] duration metric: took 4.030801ms for pod "etcd-kindnet-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:54.815433  371660 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:54.819227  371660 pod_ready.go:94] pod "kube-apiserver-kindnet-034446" is "Ready"
	I1018 15:07:54.819246  371660 pod_ready.go:86] duration metric: took 3.790957ms for pod "kube-apiserver-kindnet-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:54.821044  371660 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:55.203464  371660 pod_ready.go:94] pod "kube-controller-manager-kindnet-034446" is "Ready"
	I1018 15:07:55.203495  371660 pod_ready.go:86] duration metric: took 382.427227ms for pod "kube-controller-manager-kindnet-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:55.404011  371660 pod_ready.go:83] waiting for pod "kube-proxy-cchmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:55.803558  371660 pod_ready.go:94] pod "kube-proxy-cchmh" is "Ready"
	I1018 15:07:55.803585  371660 pod_ready.go:86] duration metric: took 399.548376ms for pod "kube-proxy-cchmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:56.003868  371660 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:56.404129  371660 pod_ready.go:94] pod "kube-scheduler-kindnet-034446" is "Ready"
	I1018 15:07:56.404153  371660 pod_ready.go:86] duration metric: took 400.256657ms for pod "kube-scheduler-kindnet-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:56.404171  371660 pod_ready.go:40] duration metric: took 1.605241151s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:07:56.458055  371660 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:07:56.461724  371660 out.go:179] * Done! kubectl is now configured to use "kindnet-034446" cluster and "default" namespace by default
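	[editor's note] The kindnet run above illustrates minikube's readiness strategy: poll the kube-system pod list, and while components are still missing (retry.go: "will retry after 594.007222ms: missing components: kube-dns"), sleep briefly and try again until a timeout. A self-contained Go sketch of that poll-with-backoff shape, with a placeholder condition standing in for the real pod check:

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"time"
	    )

	    // waitFor polls cond every interval until it reports true,
	    // returns an error, or the timeout elapses.
	    func waitFor(cond func() (bool, error), interval, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for {
	    		ok, err := cond()
	    		if err != nil {
	    			return err
	    		}
	    		if ok {
	    			return nil
	    		}
	    		if time.Now().After(deadline) {
	    			return errors.New("timed out waiting for condition")
	    		}
	    		fmt.Printf("will retry after %v\n", interval)
	    		time.Sleep(interval)
	    	}
	    }

	    func main() {
	    	tries := 0
	    	// The closure stands in for "list kube-system pods, check kube-dns".
	    	_ = waitFor(func() (bool, error) {
	    		tries++
	    		return tries >= 3, nil
	    	}, 500*time.Millisecond, 4*time.Minute)
	    	fmt.Println("all components running")
	    }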
	I1018 15:07:53.949396  379859 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-034446:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.712541483s)
	I1018 15:07:53.949436  379859 kic.go:203] duration metric: took 4.712724684s to extract preloaded images to volume ...
	W1018 15:07:53.949538  379859 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 15:07:53.949575  379859 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 15:07:53.949626  379859 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 15:07:54.011462  379859 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-034446 --name calico-034446 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-034446 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-034446 --network calico-034446 --ip 192.168.76.2 --volume calico-034446:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 15:07:54.292002  379859 cli_runner.go:164] Run: docker container inspect calico-034446 --format={{.State.Running}}
	I1018 15:07:54.313805  379859 cli_runner.go:164] Run: docker container inspect calico-034446 --format={{.State.Status}}
	I1018 15:07:54.332672  379859 cli_runner.go:164] Run: docker exec calico-034446 stat /var/lib/dpkg/alternatives/iptables
	I1018 15:07:54.379137  379859 oci.go:144] the created container "calico-034446" has a running status.
	I1018 15:07:54.379171  379859 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/calico-034446/id_rsa...
	I1018 15:07:54.656741  379859 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-89690/.minikube/machines/calico-034446/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 15:07:54.686956  379859 cli_runner.go:164] Run: docker container inspect calico-034446 --format={{.State.Status}}
	I1018 15:07:54.706197  379859 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 15:07:54.706221  379859 kic_runner.go:114] Args: [docker exec --privileged calico-034446 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 15:07:54.751009  379859 cli_runner.go:164] Run: docker container inspect calico-034446 --format={{.State.Status}}
	I1018 15:07:54.768649  379859 machine.go:93] provisionDockerMachine start ...
	I1018 15:07:54.768750  379859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034446
	I1018 15:07:54.789758  379859 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:54.790083  379859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 15:07:54.790110  379859 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:07:54.935178  379859 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-034446
	
	I1018 15:07:54.935230  379859 ubuntu.go:182] provisioning hostname "calico-034446"
	I1018 15:07:54.935362  379859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034446
	I1018 15:07:54.955205  379859 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:54.955479  379859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 15:07:54.955498  379859 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-034446 && echo "calico-034446" | sudo tee /etc/hostname
	I1018 15:07:55.106350  379859 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-034446
	
	I1018 15:07:55.106442  379859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034446
	I1018 15:07:55.124125  379859 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:55.124476  379859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 15:07:55.124507  379859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-034446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-034446/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-034446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:07:55.260848  379859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 15:07:55.260890  379859 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 15:07:55.260947  379859 ubuntu.go:190] setting up certificates
	I1018 15:07:55.260963  379859 provision.go:84] configureAuth start
	I1018 15:07:55.261019  379859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-034446
	I1018 15:07:55.282539  379859 provision.go:143] copyHostCerts
	I1018 15:07:55.282606  379859 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem, removing ...
	I1018 15:07:55.282618  379859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:07:55.282696  379859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 15:07:55.282830  379859 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem, removing ...
	I1018 15:07:55.282840  379859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:07:55.282883  379859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 15:07:55.282984  379859 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem, removing ...
	I1018 15:07:55.282996  379859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:07:55.283030  379859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 15:07:55.283139  379859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.calico-034446 san=[127.0.0.1 192.168.76.2 calico-034446 localhost minikube]
	I1018 15:07:55.603327  379859 provision.go:177] copyRemoteCerts
	I1018 15:07:55.603386  379859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:07:55.603429  379859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034446
	I1018 15:07:55.621381  379859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/calico-034446/id_rsa Username:docker}
	I1018 15:07:55.719991  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:07:55.751084  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 15:07:55.785197  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 15:07:55.804168  379859 provision.go:87] duration metric: took 543.185769ms to configureAuth
	I1018 15:07:55.804199  379859 ubuntu.go:206] setting minikube options for container-runtime
	I1018 15:07:55.804390  379859 config.go:182] Loaded profile config "calico-034446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:55.804526  379859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034446
	I1018 15:07:55.822801  379859 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:55.823078  379859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 15:07:55.823103  379859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:07:56.093143  379859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 15:07:56.093170  379859 machine.go:96] duration metric: took 1.324496044s to provisionDockerMachine
	I1018 15:07:56.093182  379859 client.go:171] duration metric: took 7.443015141s to LocalClient.Create
	I1018 15:07:56.093218  379859 start.go:167] duration metric: took 7.443101652s to libmachine.API.Create "calico-034446"
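	[editor's note] The drop-in written above (/etc/sysconfig/crio.minikube) marks the whole service CIDR 10.96.0.0/12, matching ServiceCIDR in the cluster config, as an insecure registry, presumably so in-cluster registries can be pulled without TLS; it takes effect with the `systemctl restart crio` in the same command. A sketch of composing that file content in Go (the option string is copied from the log; the local output path is illustrative only):

	    package main

	    import (
	    	"fmt"
	    	"os"
	    )

	    func main() {
	    	// Same content the SSH command above tees into
	    	// /etc/sysconfig/crio.minikube on the node.
	    	content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n",
	    		"10.96.0.0/12")
	    	if err := os.WriteFile("crio.minikube", []byte(content), 0o644); err != nil {
	    		panic(err)
	    	}
	    }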
	I1018 15:07:56.093230  379859 start.go:293] postStartSetup for "calico-034446" (driver="docker")
	I1018 15:07:56.093243  379859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:07:56.093310  379859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:07:56.093357  379859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034446
	I1018 15:07:56.114084  379859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/calico-034446/id_rsa Username:docker}
	I1018 15:07:56.223039  379859 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:07:56.226993  379859 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 15:07:56.227026  379859 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 15:07:56.227039  379859 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/addons for local assets ...
	I1018 15:07:56.227093  379859 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/files for local assets ...
	I1018 15:07:56.227205  379859 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem -> 931872.pem in /etc/ssl/certs
	I1018 15:07:56.227327  379859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:07:56.235591  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:07:56.257248  379859 start.go:296] duration metric: took 164.000949ms for postStartSetup
	I1018 15:07:56.257645  379859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-034446
	I1018 15:07:56.279134  379859 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/config.json ...
	I1018 15:07:56.279385  379859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 15:07:56.279427  379859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034446
	I1018 15:07:56.298284  379859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/calico-034446/id_rsa Username:docker}
	I1018 15:07:56.394271  379859 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 15:07:56.400323  379859 start.go:128] duration metric: took 7.752813635s to createHost
	I1018 15:07:56.400351  379859 start.go:83] releasing machines lock for "calico-034446", held for 7.753137914s
	I1018 15:07:56.400418  379859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-034446
	I1018 15:07:56.420584  379859 ssh_runner.go:195] Run: cat /version.json
	I1018 15:07:56.420629  379859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034446
	I1018 15:07:56.420762  379859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:07:56.420850  379859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034446
	I1018 15:07:56.441390  379859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/calico-034446/id_rsa Username:docker}
	I1018 15:07:56.441626  379859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/calico-034446/id_rsa Username:docker}
	I1018 15:07:56.600430  379859 ssh_runner.go:195] Run: systemctl --version
	I1018 15:07:56.608248  379859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:07:56.653275  379859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:07:56.659291  379859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:07:56.659366  379859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:07:56.695592  379859 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 15:07:56.695621  379859 start.go:495] detecting cgroup driver to use...
	I1018 15:07:56.695656  379859 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 15:07:56.695700  379859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:07:56.726165  379859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:07:56.740648  379859 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:07:56.740720  379859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:07:56.760873  379859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:07:56.782520  379859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:07:56.872815  379859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:07:56.984159  379859 docker.go:234] disabling docker service ...
	I1018 15:07:56.984233  379859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:07:57.007713  379859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:07:57.023546  379859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:07:57.123138  379859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:07:57.216260  379859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 15:07:57.231284  379859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:07:57.246969  379859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 15:07:57.247030  379859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:57.258501  379859 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 15:07:57.258569  379859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:57.268565  379859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:57.279351  379859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:57.289488  379859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:07:57.298689  379859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:57.310119  379859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:57.325844  379859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:57.337802  379859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:07:57.347693  379859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 15:07:57.356833  379859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:07:57.453862  379859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 15:07:59.232969  379859 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.779065537s)
	I1018 15:07:59.233013  379859 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:07:59.233074  379859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:07:59.237705  379859 start.go:563] Will wait 60s for crictl version
	I1018 15:07:59.237771  379859 ssh_runner.go:195] Run: which crictl
	I1018 15:07:59.241764  379859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 15:07:59.268614  379859 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 15:07:59.268693  379859 ssh_runner.go:195] Run: crio --version
	I1018 15:07:59.299214  379859 ssh_runner.go:195] Run: crio --version
	I1018 15:07:59.340829  379859 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
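	[editor's note] The sed runs logged at 15:07:57 above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10.1, switch cgroup_manager to "systemd" to match the cgroup driver detected on the host, re-add conmon_cgroup = "pod", and open low ports via a default_sysctls entry; CRI-O is then restarted to pick the file up. A Go sketch of the same line-oriented rewrite for two of those keys (the regexes mirror the sed expressions in the log; the sample input is made up):

	    package main

	    import (
	    	"fmt"
	    	"regexp"
	    )

	    func main() {
	    	conf := []byte("# drop-in fragment (sample input)\n" +
	    		"pause_image = \"registry.k8s.io/pause:3.9\"\n" +
	    		"cgroup_manager = \"cgroupfs\"\n")

	    	// Equivalent of:
	    	//   sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	    	rePause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	    	conf = rePause.ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))

	    	// Equivalent of:
	    	//   sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	    	reCgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	    	conf = reCgroup.ReplaceAll(conf, []byte(`cgroup_manager = "systemd"`))

	    	fmt.Print(string(conf))
	    }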
	
	
	==> CRI-O <==
	Oct 18 15:07:32 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:32.751418566Z" level=info msg="Started container" PID=1733 containerID=046b564824188700f697eac314f7834e14846486259c62eef45f471e6c86c38e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb/dashboard-metrics-scraper id=d6ecceaf-77db-4c99-96fc-eea78c1e3630 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4fa1982da655dc27d60c5857fc0ffa49e1d5550f8f7d54eaec61de66f148f2ac
	Oct 18 15:07:32 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:32.815078733Z" level=info msg="Removing container: ee03aabf5315fee089700a9ccdda4fd506ff394bf4bc8ba057c76b45c6314c87" id=165b1552-4ab1-4f65-a9be-1bf2f0901340 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:07:32 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:32.827853374Z" level=info msg="Removed container ee03aabf5315fee089700a9ccdda4fd506ff394bf4bc8ba057c76b45c6314c87: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb/dashboard-metrics-scraper" id=165b1552-4ab1-4f65-a9be-1bf2f0901340 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.83868993Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=eaf4b023-1405-4e58-961c-05db14fa0d87 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.839838128Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b9ab206f-f837-454f-9b97-fa86687eebe6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.840865193Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=046eaaa6-cee0-4424-b428-11e381f2e88f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.84258865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.849163468Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.849391003Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5d2600f37e4862493c2e0a2d71ee544934cfb8f12d051e1a930e68719135bacb/merged/etc/passwd: no such file or directory"
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.849430037Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5d2600f37e4862493c2e0a2d71ee544934cfb8f12d051e1a930e68719135bacb/merged/etc/group: no such file or directory"
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.84976059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.88489648Z" level=info msg="Created container 40928802aedeab3c2a9b90afd2b1acb3c9667f75da95b3ef38a0964e568b3ffd: kube-system/storage-provisioner/storage-provisioner" id=046eaaa6-cee0-4424-b428-11e381f2e88f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.88563625Z" level=info msg="Starting container: 40928802aedeab3c2a9b90afd2b1acb3c9667f75da95b3ef38a0964e568b3ffd" id=60562457-b9f3-444e-9f2c-549682adb683 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.88802898Z" level=info msg="Started container" PID=1747 containerID=40928802aedeab3c2a9b90afd2b1acb3c9667f75da95b3ef38a0964e568b3ffd description=kube-system/storage-provisioner/storage-provisioner id=60562457-b9f3-444e-9f2c-549682adb683 name=/runtime.v1.RuntimeService/StartContainer sandboxID=41fbbd2ac92d2c9847e389df194a24c9d07fa7416de48c83075c678b3da65310
	Oct 18 15:07:53 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:53.697667789Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fc2f135d-14d5-4363-b70f-f7c6c63ebeff name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:53 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:53.761658332Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e945e9a5-0040-4945-8892-89e7bccac421 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:53 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:53.767362495Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb/dashboard-metrics-scraper" id=47afbe6c-ff43-4cdc-9dee-0563a7743bc1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:53 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:53.767693081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:53 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:53.857077965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:53 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:53.857817623Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:53 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:53.916719714Z" level=info msg="Created container 27c889175cb65cf3399ec27dae37172afc92f1f6b367c006c3c16f8a1c539efb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb/dashboard-metrics-scraper" id=47afbe6c-ff43-4cdc-9dee-0563a7743bc1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:53 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:53.917841692Z" level=info msg="Starting container: 27c889175cb65cf3399ec27dae37172afc92f1f6b367c006c3c16f8a1c539efb" id=f890b3a5-999c-4939-bfac-4a601b2f90cf name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:07:53 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:53.921366648Z" level=info msg="Started container" PID=1781 containerID=27c889175cb65cf3399ec27dae37172afc92f1f6b367c006c3c16f8a1c539efb description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb/dashboard-metrics-scraper id=f890b3a5-999c-4939-bfac-4a601b2f90cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=4fa1982da655dc27d60c5857fc0ffa49e1d5550f8f7d54eaec61de66f148f2ac
	Oct 18 15:07:54 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:54.87956159Z" level=info msg="Removing container: 046b564824188700f697eac314f7834e14846486259c62eef45f471e6c86c38e" id=baf15801-7063-4731-ac57-a66b1244330d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:07:54 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:54.89252571Z" level=info msg="Removed container 046b564824188700f697eac314f7834e14846486259c62eef45f471e6c86c38e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb/dashboard-metrics-scraper" id=baf15801-7063-4731-ac57-a66b1244330d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	27c889175cb65       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   4fa1982da655d       dashboard-metrics-scraper-6ffb444bf9-9nlsb             kubernetes-dashboard
	40928802aedea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   41fbbd2ac92d2       storage-provisioner                                    kube-system
	e38e8320d8865       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   a64e01e6999b3       kubernetes-dashboard-855c9754f9-7nj88                  kubernetes-dashboard
	0610acd94046a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   e0b901cc3983b       busybox                                                default
	e88a1356120d2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   fbb83c221ad56       coredns-66bc5c9577-dtjgd                               kube-system
	4159ca4d468f4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   bf0ce8a1e3b2f       kindnet-nvnw6                                          kube-system
	b87ad405fde62       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           49 seconds ago      Running             kube-proxy                  0                   d21a4793b7aa2       kube-proxy-7wbfs                                       kube-system
	09fa1b647fa4f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   41fbbd2ac92d2       storage-provisioner                                    kube-system
	7fb5589151f3a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           52 seconds ago      Running             etcd                        0                   98de4f1e6a15a       etcd-default-k8s-diff-port-489104                      kube-system
	ce9720cd32591       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           52 seconds ago      Running             kube-scheduler              0                   41ae85fcc320c       kube-scheduler-default-k8s-diff-port-489104            kube-system
	1e308c368e373       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           52 seconds ago      Running             kube-apiserver              0                   ec03206507acf       kube-apiserver-default-k8s-diff-port-489104            kube-system
	2358e366cd975       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           52 seconds ago      Running             kube-controller-manager     0                   68dc57583544a       kube-controller-manager-default-k8s-diff-port-489104   kube-system
	
	
	==> coredns [e88a1356120d204fdc4589ff5088ddfaf6f22f9da2c191956e946643ae7c3ae2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49606 - 4796 "HINFO IN 3785732568142779832.626035667130173452. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.08647486s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
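	[editor's note] The dial timeouts above all target 10.96.0.1:443, the first usable address of the 10.96.0.0/12 service CIDR, i.e. the in-cluster kubernetes Service that fronts the API server; CoreDNS keeps retrying its list/watch calls until that VIP becomes reachable. A minimal Go probe of the same endpoint (address copied from the log; the VIP only resolves from inside the cluster network, so run it in a pod):

	    package main

	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    func main() {
	    	// Same endpoint CoreDNS was timing out against in the log above.
	    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	    	if err != nil {
	    		fmt.Println("dial failed (expected outside the cluster):", err)
	    		return
	    	}
	    	defer conn.Close()
	    	fmt.Println("service VIP reachable:", conn.RemoteAddr())
	    }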
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-489104
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-489104
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=default-k8s-diff-port-489104
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_06_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:06:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-489104
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:07:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:07:40 +0000   Sat, 18 Oct 2025 15:06:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:07:40 +0000   Sat, 18 Oct 2025 15:06:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:07:40 +0000   Sat, 18 Oct 2025 15:06:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 15:07:40 +0000   Sat, 18 Oct 2025 15:06:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-489104
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                2a8259a7-7ba4-40c3-bcf3-f004f9ae6965
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-dtjgd                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-default-k8s-diff-port-489104                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-nvnw6                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-default-k8s-diff-port-489104             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-489104    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-7wbfs                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-default-k8s-diff-port-489104             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-9nlsb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7nj88                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node default-k8s-diff-port-489104 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node default-k8s-diff-port-489104 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node default-k8s-diff-port-489104 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s               node-controller  Node default-k8s-diff-port-489104 event: Registered Node default-k8s-diff-port-489104 in Controller
	  Normal  NodeReady                94s                kubelet          Node default-k8s-diff-port-489104 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-489104 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-489104 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-489104 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node default-k8s-diff-port-489104 event: Registered Node default-k8s-diff-port-489104 in Controller
	
	
	==> dmesg <==
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	
	
	==> etcd [7fb5589151f3a78025f09ce1b546891fb02d25162971c57434743de3e24cbe9f] <==
	{"level":"warn","ts":"2025-10-18T15:07:09.672110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.679603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.686908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.694953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.703841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.711163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.720669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.728424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.735450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.743043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.750578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.758148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.765575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.774178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.782458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.789275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.796871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.805419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.815049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.825847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.842353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.905475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42184","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T15:07:18.890000Z","caller":"traceutil/trace.go:172","msg":"trace[1654892729] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"118.092431ms","start":"2025-10-18T15:07:18.771883Z","end":"2025-10-18T15:07:18.889976Z","steps":["trace[1654892729] 'process raft request'  (duration: 89.614533ms)","trace[1654892729] 'compare'  (duration: 28.347448ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T15:07:19.870979Z","caller":"traceutil/trace.go:172","msg":"trace[1088078680] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"103.83802ms","start":"2025-10-18T15:07:19.767122Z","end":"2025-10-18T15:07:19.870960Z","steps":["trace[1088078680] 'process raft request'  (duration: 103.683031ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T15:07:20.002036Z","caller":"traceutil/trace.go:172","msg":"trace[1900141265] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"231.268397ms","start":"2025-10-18T15:07:19.770741Z","end":"2025-10-18T15:07:20.002009Z","steps":["trace[1900141265] 'process raft request'  (duration: 225.407062ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:08:01 up  2:50,  0 user,  load average: 5.61, 3.75, 2.36
	Linux default-k8s-diff-port-489104 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4159ca4d468f49826a85e981582c651a18b8a39a2bddebeb07af010243f0d04f] <==
	I1018 15:07:11.310507       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 15:07:11.310991       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 15:07:11.311222       1 main.go:148] setting mtu 1500 for CNI 
	I1018 15:07:11.311287       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 15:07:11.311339       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T15:07:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 15:07:11.607896       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 15:07:11.608031       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 15:07:11.608061       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 15:07:11.608449       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 15:07:12.008547       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 15:07:12.008578       1 metrics.go:72] Registering metrics
	I1018 15:07:12.008655       1 controller.go:711] "Syncing nftables rules"
	I1018 15:07:21.608297       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:07:21.608343       1 main.go:301] handling current node
	I1018 15:07:31.610044       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:07:31.610092       1 main.go:301] handling current node
	I1018 15:07:41.607898       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:07:41.608032       1 main.go:301] handling current node
	I1018 15:07:51.614035       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:07:51.614076       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1e308c368e373ccff9c4504f9e6503c09e4a7d1e0200e60472eaf38378135b96] <==
	I1018 15:07:10.431471       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 15:07:10.431530       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 15:07:10.431826       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 15:07:10.431883       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 15:07:10.431988       1 aggregator.go:171] initial CRD sync complete...
	I1018 15:07:10.432012       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 15:07:10.432019       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 15:07:10.432025       1 cache.go:39] Caches are synced for autoregister controller
	I1018 15:07:10.432175       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1018 15:07:10.440567       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 15:07:10.450092       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:07:10.453415       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 15:07:10.455346       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 15:07:10.752632       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 15:07:10.770073       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 15:07:10.802772       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 15:07:10.827750       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:07:10.838666       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:07:10.912328       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.68.150"}
	I1018 15:07:10.925596       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.156.180"}
	I1018 15:07:11.328056       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:07:14.168578       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 15:07:14.218582       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 15:07:14.269006       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [2358e366cd9757f3067562185021f0051cae924e07f221015b53a392bf5f90b2] <==
	I1018 15:07:13.731829       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 15:07:13.736242       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 15:07:13.761557       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 15:07:13.764870       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 15:07:13.764926       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 15:07:13.764945       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 15:07:13.764995       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 15:07:13.765005       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 15:07:13.765015       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 15:07:13.765023       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 15:07:13.766309       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 15:07:13.768143       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 15:07:13.770089       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 15:07:13.771067       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 15:07:13.771107       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 15:07:13.773667       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 15:07:13.773795       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 15:07:13.773876       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 15:07:13.773908       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-489104"
	I1018 15:07:13.773978       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 15:07:13.774747       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 15:07:13.777043       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 15:07:13.778971       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 15:07:13.781140       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 15:07:13.792510       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b87ad405fde623f73cb099b26bcf0ab11f50050837725bf409c775dfa67cde02] <==
	I1018 15:07:11.118130       1 server_linux.go:53] "Using iptables proxy"
	I1018 15:07:11.175756       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 15:07:11.276391       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 15:07:11.276446       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1018 15:07:11.276552       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 15:07:11.296480       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 15:07:11.296539       1 server_linux.go:132] "Using iptables Proxier"
	I1018 15:07:11.302237       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 15:07:11.303190       1 server.go:527] "Version info" version="v1.34.1"
	I1018 15:07:11.303274       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:07:11.305683       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 15:07:11.305705       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 15:07:11.305748       1 config.go:200] "Starting service config controller"
	I1018 15:07:11.305753       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 15:07:11.305769       1 config.go:106] "Starting endpoint slice config controller"
	I1018 15:07:11.305774       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 15:07:11.305981       1 config.go:309] "Starting node config controller"
	I1018 15:07:11.305995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 15:07:11.306003       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 15:07:11.406483       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 15:07:11.406503       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 15:07:11.406487       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ce9720cd32591b6942daf642e28e8696920a5b3fcb4f8eddcd689c9ef3054c1e] <==
	I1018 15:07:09.312976       1 serving.go:386] Generated self-signed cert in-memory
	W1018 15:07:10.356211       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 15:07:10.356344       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 15:07:10.356362       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 15:07:10.356371       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 15:07:10.406249       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 15:07:10.406289       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:07:10.414973       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:07:10.415022       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:07:10.417026       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 15:07:10.417130       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 15:07:10.515574       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 15:07:14 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:14.395840     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1b6c4748-2e70-49d6-9351-f74aafc76edc-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-9nlsb\" (UID: \"1b6c4748-2e70-49d6-9351-f74aafc76edc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb"
	Oct 18 15:07:17 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:17.755966     718 scope.go:117] "RemoveContainer" containerID="dca9d7ac081caa4232673650ff3363664e763b4bcdb566b3879738a394e9aa73"
	Oct 18 15:07:18 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:18.761410     718 scope.go:117] "RemoveContainer" containerID="dca9d7ac081caa4232673650ff3363664e763b4bcdb566b3879738a394e9aa73"
	Oct 18 15:07:18 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:18.761651     718 scope.go:117] "RemoveContainer" containerID="ee03aabf5315fee089700a9ccdda4fd506ff394bf4bc8ba057c76b45c6314c87"
	Oct 18 15:07:18 default-k8s-diff-port-489104 kubelet[718]: E1018 15:07:18.761812     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9nlsb_kubernetes-dashboard(1b6c4748-2e70-49d6-9351-f74aafc76edc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb" podUID="1b6c4748-2e70-49d6-9351-f74aafc76edc"
	Oct 18 15:07:19 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:19.764033     718 scope.go:117] "RemoveContainer" containerID="ee03aabf5315fee089700a9ccdda4fd506ff394bf4bc8ba057c76b45c6314c87"
	Oct 18 15:07:19 default-k8s-diff-port-489104 kubelet[718]: E1018 15:07:19.764199     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9nlsb_kubernetes-dashboard(1b6c4748-2e70-49d6-9351-f74aafc76edc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb" podUID="1b6c4748-2e70-49d6-9351-f74aafc76edc"
	Oct 18 15:07:20 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:20.766789     718 scope.go:117] "RemoveContainer" containerID="ee03aabf5315fee089700a9ccdda4fd506ff394bf4bc8ba057c76b45c6314c87"
	Oct 18 15:07:20 default-k8s-diff-port-489104 kubelet[718]: E1018 15:07:20.767085     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9nlsb_kubernetes-dashboard(1b6c4748-2e70-49d6-9351-f74aafc76edc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb" podUID="1b6c4748-2e70-49d6-9351-f74aafc76edc"
	Oct 18 15:07:25 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:25.705534     718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7nj88" podStartSLOduration=4.749829879 podStartE2EDuration="11.705510409s" podCreationTimestamp="2025-10-18 15:07:14 +0000 UTC" firstStartedPulling="2025-10-18 15:07:14.673127478 +0000 UTC m=+7.094147298" lastFinishedPulling="2025-10-18 15:07:21.628807985 +0000 UTC m=+14.049827828" observedRunningTime="2025-10-18 15:07:21.787991202 +0000 UTC m=+14.209011045" watchObservedRunningTime="2025-10-18 15:07:25.705510409 +0000 UTC m=+18.126530252"
	Oct 18 15:07:32 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:32.697058     718 scope.go:117] "RemoveContainer" containerID="ee03aabf5315fee089700a9ccdda4fd506ff394bf4bc8ba057c76b45c6314c87"
	Oct 18 15:07:32 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:32.811730     718 scope.go:117] "RemoveContainer" containerID="ee03aabf5315fee089700a9ccdda4fd506ff394bf4bc8ba057c76b45c6314c87"
	Oct 18 15:07:32 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:32.811983     718 scope.go:117] "RemoveContainer" containerID="046b564824188700f697eac314f7834e14846486259c62eef45f471e6c86c38e"
	Oct 18 15:07:32 default-k8s-diff-port-489104 kubelet[718]: E1018 15:07:32.812166     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9nlsb_kubernetes-dashboard(1b6c4748-2e70-49d6-9351-f74aafc76edc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb" podUID="1b6c4748-2e70-49d6-9351-f74aafc76edc"
	Oct 18 15:07:40 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:40.291410     718 scope.go:117] "RemoveContainer" containerID="046b564824188700f697eac314f7834e14846486259c62eef45f471e6c86c38e"
	Oct 18 15:07:40 default-k8s-diff-port-489104 kubelet[718]: E1018 15:07:40.291641     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9nlsb_kubernetes-dashboard(1b6c4748-2e70-49d6-9351-f74aafc76edc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb" podUID="1b6c4748-2e70-49d6-9351-f74aafc76edc"
	Oct 18 15:07:41 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:41.838257     718 scope.go:117] "RemoveContainer" containerID="09fa1b647fa4fd7599c9fa5e528e44f54acc68f2fbf3314632c5c794ff039576"
	Oct 18 15:07:53 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:53.697137     718 scope.go:117] "RemoveContainer" containerID="046b564824188700f697eac314f7834e14846486259c62eef45f471e6c86c38e"
	Oct 18 15:07:54 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:54.877241     718 scope.go:117] "RemoveContainer" containerID="046b564824188700f697eac314f7834e14846486259c62eef45f471e6c86c38e"
	Oct 18 15:07:54 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:54.877514     718 scope.go:117] "RemoveContainer" containerID="27c889175cb65cf3399ec27dae37172afc92f1f6b367c006c3c16f8a1c539efb"
	Oct 18 15:07:54 default-k8s-diff-port-489104 kubelet[718]: E1018 15:07:54.878657     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9nlsb_kubernetes-dashboard(1b6c4748-2e70-49d6-9351-f74aafc76edc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb" podUID="1b6c4748-2e70-49d6-9351-f74aafc76edc"
	Oct 18 15:07:57 default-k8s-diff-port-489104 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 15:07:57 default-k8s-diff-port-489104 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 15:07:57 default-k8s-diff-port-489104 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 15:07:57 default-k8s-diff-port-489104 systemd[1]: kubelet.service: Consumed 1.699s CPU time.
	
	
	==> kubernetes-dashboard [e38e8320d8865a084f077cc5774404b5b2815ffd7589ad4d5844cfa0edc768c5] <==
	2025/10/18 15:07:21 Using namespace: kubernetes-dashboard
	2025/10/18 15:07:21 Using in-cluster config to connect to apiserver
	2025/10/18 15:07:21 Using secret token for csrf signing
	2025/10/18 15:07:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 15:07:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 15:07:21 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 15:07:21 Generating JWE encryption key
	2025/10/18 15:07:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 15:07:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 15:07:21 Initializing JWE encryption key from synchronized object
	2025/10/18 15:07:21 Creating in-cluster Sidecar client
	2025/10/18 15:07:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 15:07:21 Serving insecurely on HTTP port: 9090
	2025/10/18 15:07:21 Starting overwatch
	2025/10/18 15:07:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [09fa1b647fa4fd7599c9fa5e528e44f54acc68f2fbf3314632c5c794ff039576] <==
	I1018 15:07:11.082635       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 15:07:41.086373       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [40928802aedeab3c2a9b90afd2b1acb3c9667f75da95b3ef38a0964e568b3ffd] <==
	I1018 15:07:41.902555       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 15:07:41.911748       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 15:07:41.911794       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 15:07:41.914361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:45.370572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:49.631993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:53.231384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:56.285172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:59.308482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:59.335900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 15:07:59.336057       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 15:07:59.336210       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-489104_d7c07fb2-c394-439a-95ba-5f6ceb0d640f!
	I1018 15:07:59.336191       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b77dfb48-26a4-4c5e-9880-c5c307861880", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-489104_d7c07fb2-c394-439a-95ba-5f6ceb0d640f became leader
	W1018 15:07:59.338450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:59.341974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 15:07:59.436995       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-489104_d7c07fb2-c394-439a-95ba-5f6ceb0d640f!
	W1018 15:08:01.346103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:08:01.352663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-489104 -n default-k8s-diff-port-489104
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-489104 -n default-k8s-diff-port-489104: exit status 2 (352.206528ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-489104 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-489104
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-489104:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58",
	        "Created": "2025-10-18T15:05:55.975362915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 366978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T15:07:00.170286433Z",
	            "FinishedAt": "2025-10-18T15:06:58.958035594Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58/hostname",
	        "HostsPath": "/var/lib/docker/containers/028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58/hosts",
	        "LogPath": "/var/lib/docker/containers/028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58/028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58-json.log",
	        "Name": "/default-k8s-diff-port-489104",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-489104:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-489104",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "028760fe9fe5f99d9c049fe5160b2d3e2ddf7c1224927945c7e19259ab229b58",
	                "LowerDir": "/var/lib/docker/overlay2/d41b3b6686e5c33cdc02253cbb7729e5677ff70aca495d84cb9a64aec8ce6497-init/diff:/var/lib/docker/overlay2/ab80d605fffb7aa1e171149be353102d2c96f2bb0dc78ec830ba0d6a6f7f1359/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d41b3b6686e5c33cdc02253cbb7729e5677ff70aca495d84cb9a64aec8ce6497/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d41b3b6686e5c33cdc02253cbb7729e5677ff70aca495d84cb9a64aec8ce6497/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d41b3b6686e5c33cdc02253cbb7729e5677ff70aca495d84cb9a64aec8ce6497/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-489104",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-489104/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-489104",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-489104",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-489104",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ffddfb2dd428b00958beca129d3775f125283ab017f76f6cc00e62e7306cdca",
	            "SandboxKey": "/var/run/docker/netns/3ffddfb2dd42",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-489104": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:6d:1d:26:9b:20",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc1ae438e1a0053de9cf1d93573ce1c4498bc18884eb76fa43ba91a693a5bdd8",
	                    "EndpointID": "df705a79428b690e011a56e97bb27da3502ff460cdf30d624f63c6286b6b887e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-489104",
	                        "028760fe9fe5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-489104 -n default-k8s-diff-port-489104
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-489104 -n default-k8s-diff-port-489104: exit status 2 (335.903958ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-489104 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-489104 logs -n 25: (1.298086063s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-034446 sudo cat /var/lib/kubelet/config.yaml                                                                                                               │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo systemctl status docker --all --full --no-pager                                                                                                │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ ssh     │ -p auto-034446 sudo systemctl cat docker --no-pager                                                                                                                │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ delete  │ -p embed-certs-775590                                                                                                                                              │ embed-certs-775590           │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo cat /etc/docker/daemon.json                                                                                                                    │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ start   │ -p calico-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-034446                │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ ssh     │ -p auto-034446 sudo docker system info                                                                                                                             │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ ssh     │ -p auto-034446 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ ssh     │ -p auto-034446 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ ssh     │ -p auto-034446 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo cri-dockerd --version                                                                                                                          │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ ssh     │ -p auto-034446 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo containerd config dump                                                                                                                         │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ ssh     │ -p auto-034446 sudo crio config                                                                                                                                    │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ delete  │ -p auto-034446                                                                                                                                                     │ auto-034446                  │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ start   │ -p custom-flannel-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-034446        │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	│ image   │ default-k8s-diff-port-489104 image list --format=json                                                                                                              │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │ 18 Oct 25 15:07 UTC │
	│ pause   │ -p default-k8s-diff-port-489104 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-489104 │ jenkins │ v1.37.0 │ 18 Oct 25 15:07 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 15:07:55
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 15:07:55.981529  383339 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:07:55.981824  383339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:07:55.981834  383339 out.go:374] Setting ErrFile to fd 2...
	I1018 15:07:55.981838  383339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:07:55.982066  383339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:07:55.982546  383339 out.go:368] Setting JSON to false
	I1018 15:07:55.983754  383339 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10227,"bootTime":1760789849,"procs":354,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:07:55.983847  383339 start.go:141] virtualization: kvm guest
	I1018 15:07:55.986155  383339 out.go:179] * [custom-flannel-034446] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:07:55.987602  383339 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:07:55.987604  383339 notify.go:220] Checking for updates...
	I1018 15:07:55.989022  383339 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:07:55.990469  383339 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:07:55.991682  383339 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 15:07:55.992954  383339 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:07:55.994135  383339 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:07:55.995874  383339 config.go:182] Loaded profile config "calico-034446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:55.996024  383339 config.go:182] Loaded profile config "default-k8s-diff-port-489104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:55.996169  383339 config.go:182] Loaded profile config "kindnet-034446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:55.996340  383339 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:07:56.024543  383339 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:07:56.024707  383339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:07:56.085267  383339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 15:07:56.074966611 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:07:56.085383  383339 docker.go:318] overlay module found
	I1018 15:07:56.087357  383339 out.go:179] * Using the docker driver based on user configuration
	I1018 15:07:56.088905  383339 start.go:305] selected driver: docker
	I1018 15:07:56.088938  383339 start.go:925] validating driver "docker" against <nil>
	I1018 15:07:56.088954  383339 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:07:56.089525  383339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:07:56.149791  383339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 15:07:56.140153653 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:07:56.150005  383339 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 15:07:56.150327  383339 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:07:56.152190  383339 out.go:179] * Using Docker driver with root privileges
	I1018 15:07:56.153412  383339 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1018 15:07:56.153443  383339 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1018 15:07:56.153523  383339 start.go:349] cluster config:
	{Name:custom-flannel-034446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-034446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:07:56.154815  383339 out.go:179] * Starting "custom-flannel-034446" primary control-plane node in "custom-flannel-034446" cluster
	I1018 15:07:56.156026  383339 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 15:07:56.157354  383339 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 15:07:56.158566  383339 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:07:56.158598  383339 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 15:07:56.158616  383339 cache.go:58] Caching tarball of preloaded images
	I1018 15:07:56.158664  383339 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 15:07:56.158707  383339 preload.go:233] Found /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 15:07:56.158722  383339 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 15:07:56.158824  383339 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/custom-flannel-034446/config.json ...
	I1018 15:07:56.158855  383339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/custom-flannel-034446/config.json: {Name:mke19ee04e1a98dade9dc2783a6332b77aeb5378 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:56.180642  383339 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 15:07:56.180661  383339 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 15:07:56.180677  383339 cache.go:232] Successfully downloaded all kic artifacts
	I1018 15:07:56.180704  383339 start.go:360] acquireMachinesLock for custom-flannel-034446: {Name:mkcc5b61b07fb57d79f02863a1d88929fc18f126 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 15:07:56.180816  383339 start.go:364] duration metric: took 89.323µs to acquireMachinesLock for "custom-flannel-034446"
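The `acquireMachinesLock` line above shows the lock parameters minikube uses (Delay:500ms, Timeout:10m0s). A sketch of that acquire-with-retry shape using an exclusive lock file; this is an illustration only, as minikube's own lock implementation differs:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquire polls for an exclusive lock file until it succeeds or the
	// timeout elapses, mirroring the Delay/Timeout pair in the log line.
	func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out waiting for machines lock")
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("lock held; provisioning machine...")
	}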
	I1018 15:07:56.180848  383339 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-034446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-034446 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 15:07:56.180951  383339 start.go:125] createHost starting for "" (driver="docker")
	I1018 15:07:54.176114  371660 system_pods.go:86] 8 kube-system pods found
	I1018 15:07:54.176157  371660 system_pods.go:89] "coredns-66bc5c9577-xv4pt" [bd308514-04ee-4c8f-ae45-2156174a0b17] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 15:07:54.176165  371660 system_pods.go:89] "etcd-kindnet-034446" [00e31727-8712-4c46-8cd7-6c57ec04dc35] Running
	I1018 15:07:54.176173  371660 system_pods.go:89] "kindnet-8bv5s" [a86107b1-1157-4557-8c91-36297f683f57] Running
	I1018 15:07:54.176180  371660 system_pods.go:89] "kube-apiserver-kindnet-034446" [50b8de39-20a1-4e8f-bd2b-cc361d831564] Running
	I1018 15:07:54.176186  371660 system_pods.go:89] "kube-controller-manager-kindnet-034446" [8eb018da-f160-46ea-8782-422c881396cf] Running
	I1018 15:07:54.176192  371660 system_pods.go:89] "kube-proxy-cchmh" [d0964c4e-457c-4b27-837d-09afebe67c53] Running
	I1018 15:07:54.176201  371660 system_pods.go:89] "kube-scheduler-kindnet-034446" [4f95b538-7c91-46bb-b092-cfdf4971a4ec] Running
	I1018 15:07:54.176212  371660 system_pods.go:89] "storage-provisioner" [30dcc47b-3797-4dba-91b3-c61e161c132d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 15:07:54.176232  371660 retry.go:31] will retry after 594.007222ms: missing components: kube-dns
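The retry.go line above is the visible edge of a poll loop: list the kube-system pods, report which required components are still missing, sleep a jittered interval, and try again. A self-contained sketch of that loop shape (the `check` callback is a stand-in for the real pod listing):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// waitForComponents polls check() until no components are missing or
	// the timeout elapses, sleeping ~500ms plus jitter between attempts,
	// which is the pattern the "will retry after ..." lines come from.
	func waitForComponents(check func() []string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			missing := check()
			if len(missing) == 0 {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("still missing components: %v", missing)
			}
			sleep := 500*time.Millisecond + time.Duration(rand.Intn(300))*time.Millisecond
			fmt.Printf("will retry after %v: missing components: %v\n", sleep, missing)
			time.Sleep(sleep)
		}
	}

	func main() {
		attempts := 0
		err := waitForComponents(func() []string {
			attempts++
			if attempts < 3 {
				return []string{"kube-dns"} // simulated: coredns not Ready yet
			}
			return nil
		}, time.Minute)
		fmt.Println("done:", err)
	}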
	I1018 15:07:54.775397  371660 system_pods.go:86] 8 kube-system pods found
	I1018 15:07:54.775438  371660 system_pods.go:89] "coredns-66bc5c9577-xv4pt" [bd308514-04ee-4c8f-ae45-2156174a0b17] Running
	I1018 15:07:54.775447  371660 system_pods.go:89] "etcd-kindnet-034446" [00e31727-8712-4c46-8cd7-6c57ec04dc35] Running
	I1018 15:07:54.775453  371660 system_pods.go:89] "kindnet-8bv5s" [a86107b1-1157-4557-8c91-36297f683f57] Running
	I1018 15:07:54.775459  371660 system_pods.go:89] "kube-apiserver-kindnet-034446" [50b8de39-20a1-4e8f-bd2b-cc361d831564] Running
	I1018 15:07:54.775466  371660 system_pods.go:89] "kube-controller-manager-kindnet-034446" [8eb018da-f160-46ea-8782-422c881396cf] Running
	I1018 15:07:54.775472  371660 system_pods.go:89] "kube-proxy-cchmh" [d0964c4e-457c-4b27-837d-09afebe67c53] Running
	I1018 15:07:54.775477  371660 system_pods.go:89] "kube-scheduler-kindnet-034446" [4f95b538-7c91-46bb-b092-cfdf4971a4ec] Running
	I1018 15:07:54.775483  371660 system_pods.go:89] "storage-provisioner" [30dcc47b-3797-4dba-91b3-c61e161c132d] Running
	I1018 15:07:54.775493  371660 system_pods.go:126] duration metric: took 1.483599668s to wait for k8s-apps to be running ...
	I1018 15:07:54.775507  371660 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 15:07:54.775560  371660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 15:07:54.790748  371660 system_svc.go:56] duration metric: took 15.231892ms WaitForService to wait for kubelet
	I1018 15:07:54.790775  371660 kubeadm.go:586] duration metric: took 13.366255673s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 15:07:54.790797  371660 node_conditions.go:102] verifying NodePressure condition ...
	I1018 15:07:54.794245  371660 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 15:07:54.794276  371660 node_conditions.go:123] node cpu capacity is 8
	I1018 15:07:54.794294  371660 node_conditions.go:105] duration metric: took 3.48618ms to run NodePressure ...
	I1018 15:07:54.794312  371660 start.go:241] waiting for startup goroutines ...
	I1018 15:07:54.794331  371660 start.go:246] waiting for cluster config update ...
	I1018 15:07:54.794349  371660 start.go:255] writing updated cluster config ...
	I1018 15:07:54.794674  371660 ssh_runner.go:195] Run: rm -f paused
	I1018 15:07:54.798881  371660 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:07:54.802596  371660 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xv4pt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:54.807121  371660 pod_ready.go:94] pod "coredns-66bc5c9577-xv4pt" is "Ready"
	I1018 15:07:54.807144  371660 pod_ready.go:86] duration metric: took 4.521724ms for pod "coredns-66bc5c9577-xv4pt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:54.809280  371660 pod_ready.go:83] waiting for pod "etcd-kindnet-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:54.813312  371660 pod_ready.go:94] pod "etcd-kindnet-034446" is "Ready"
	I1018 15:07:54.813330  371660 pod_ready.go:86] duration metric: took 4.030801ms for pod "etcd-kindnet-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:54.815433  371660 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:54.819227  371660 pod_ready.go:94] pod "kube-apiserver-kindnet-034446" is "Ready"
	I1018 15:07:54.819246  371660 pod_ready.go:86] duration metric: took 3.790957ms for pod "kube-apiserver-kindnet-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:54.821044  371660 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:55.203464  371660 pod_ready.go:94] pod "kube-controller-manager-kindnet-034446" is "Ready"
	I1018 15:07:55.203495  371660 pod_ready.go:86] duration metric: took 382.427227ms for pod "kube-controller-manager-kindnet-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:55.404011  371660 pod_ready.go:83] waiting for pod "kube-proxy-cchmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:55.803558  371660 pod_ready.go:94] pod "kube-proxy-cchmh" is "Ready"
	I1018 15:07:55.803585  371660 pod_ready.go:86] duration metric: took 399.548376ms for pod "kube-proxy-cchmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:56.003868  371660 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:56.404129  371660 pod_ready.go:94] pod "kube-scheduler-kindnet-034446" is "Ready"
	I1018 15:07:56.404153  371660 pod_ready.go:86] duration metric: took 400.256657ms for pod "kube-scheduler-kindnet-034446" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 15:07:56.404171  371660 pod_ready.go:40] duration metric: took 1.605241151s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 15:07:56.458055  371660 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 15:07:56.461724  371660 out.go:179] * Done! kubectl is now configured to use "kindnet-034446" cluster and "default" namespace by default
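The pod_ready.go sequence that precedes this "Done!" line waits for each control-plane pod to carry a Ready=True condition. A minimal client-go sketch of that per-pod check, assuming the default kubeconfig path; the namespace and pod name are taken from the log:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod has a Ready=True condition,
	// which is what the pod_ready.go lines above poll for.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
			"etcd-kindnet-034446", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", isPodReady(pod))
	}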
	I1018 15:07:53.949396  379859 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-034446:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.712541483s)
	I1018 15:07:53.949436  379859 kic.go:203] duration metric: took 4.712724684s to extract preloaded images to volume ...
	W1018 15:07:53.949538  379859 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 15:07:53.949575  379859 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 15:07:53.949626  379859 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 15:07:54.011462  379859 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-034446 --name calico-034446 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-034446 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-034446 --network calico-034446 --ip 192.168.76.2 --volume calico-034446:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 15:07:54.292002  379859 cli_runner.go:164] Run: docker container inspect calico-034446 --format={{.State.Running}}
	I1018 15:07:54.313805  379859 cli_runner.go:164] Run: docker container inspect calico-034446 --format={{.State.Status}}
	I1018 15:07:54.332672  379859 cli_runner.go:164] Run: docker exec calico-034446 stat /var/lib/dpkg/alternatives/iptables
	I1018 15:07:54.379137  379859 oci.go:144] the created container "calico-034446" has a running status.
	I1018 15:07:54.379171  379859 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/calico-034446/id_rsa...
	I1018 15:07:54.656741  379859 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-89690/.minikube/machines/calico-034446/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 15:07:54.686956  379859 cli_runner.go:164] Run: docker container inspect calico-034446 --format={{.State.Status}}
	I1018 15:07:54.706197  379859 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 15:07:54.706221  379859 kic_runner.go:114] Args: [docker exec --privileged calico-034446 chown docker:docker /home/docker/.ssh/authorized_keys]
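The kic.go/kic_runner.go lines above generate an RSA keypair on the host, then copy the public half into the container as /home/docker/.ssh/authorized_keys and fix its ownership. A sketch of producing those two artifacts, assuming RSA-2048; minikube's key handling may differ in detail:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// PEM-encoded private key, written with 0600 like an id_rsa file.
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
			panic(err)
		}
		// authorized_keys-format public key, the piece copied into the container.
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
			panic(err)
		}
	}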
	I1018 15:07:54.751009  379859 cli_runner.go:164] Run: docker container inspect calico-034446 --format={{.State.Status}}
	I1018 15:07:54.768649  379859 machine.go:93] provisionDockerMachine start ...
	I1018 15:07:54.768750  379859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034446
	I1018 15:07:54.789758  379859 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:54.790083  379859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 15:07:54.790110  379859 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:07:54.935178  379859 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-034446
	
	I1018 15:07:54.935230  379859 ubuntu.go:182] provisioning hostname "calico-034446"
	I1018 15:07:54.935362  379859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034446
	I1018 15:07:54.955205  379859 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:54.955479  379859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 15:07:54.955498  379859 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-034446 && echo "calico-034446" | sudo tee /etc/hostname
	I1018 15:07:55.106350  379859 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-034446
	
	I1018 15:07:55.106442  379859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034446
	I1018 15:07:55.124125  379859 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:55.124476  379859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 15:07:55.124507  379859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-034446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-034446/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-034446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:07:55.260848  379859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 15:07:55.260890  379859 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-89690/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-89690/.minikube}
	I1018 15:07:55.260947  379859 ubuntu.go:190] setting up certificates
	I1018 15:07:55.260963  379859 provision.go:84] configureAuth start
	I1018 15:07:55.261019  379859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-034446
	I1018 15:07:55.282539  379859 provision.go:143] copyHostCerts
	I1018 15:07:55.282606  379859 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem, removing ...
	I1018 15:07:55.282618  379859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem
	I1018 15:07:55.282696  379859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/ca.pem (1082 bytes)
	I1018 15:07:55.282830  379859 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem, removing ...
	I1018 15:07:55.282840  379859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem
	I1018 15:07:55.282883  379859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/cert.pem (1123 bytes)
	I1018 15:07:55.282984  379859 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem, removing ...
	I1018 15:07:55.282996  379859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem
	I1018 15:07:55.283030  379859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-89690/.minikube/key.pem (1675 bytes)
	I1018 15:07:55.283139  379859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem org=jenkins.calico-034446 san=[127.0.0.1 192.168.76.2 calico-034446 localhost minikube]
	I1018 15:07:55.603327  379859 provision.go:177] copyRemoteCerts
	I1018 15:07:55.603386  379859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 15:07:55.603429  379859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034446
	I1018 15:07:55.621381  379859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/calico-034446/id_rsa Username:docker}
	I1018 15:07:55.719991  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 15:07:55.751084  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 15:07:55.785197  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 15:07:55.804168  379859 provision.go:87] duration metric: took 543.185769ms to configureAuth
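configureAuth culminates in the provision.go:117 line above: a server certificate signed by the cached CA, with SANs covering every name the machine answers to (127.0.0.1, 192.168.76.2, calico-034446, localhost, minikube). A crypto/x509 sketch issuing such a cert; the self-signed CA here stands in for minikube's reused ca.pem/ca-key.pem, and the 26280h lifetime echoes the CertExpiration in the cluster config:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	func main() {
		// Throwaway CA; minikube loads its existing one from .minikube/certs.
		caKey := must(rsa.GenerateKey(rand.Reader, 2048))
		caTpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caCert := must(x509.ParseCertificate(
			must(x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey))))

		// Server cert with the exact SANs from the provision.go line above.
		srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
		srvTpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.calico-034446"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			DNSNames:     []string{"calico-034446", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der := must(x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey))
		pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		if err := os.WriteFile("server.pem", pemBytes, 0o644); err != nil {
			panic(err)
		}
	}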
	I1018 15:07:55.804199  379859 ubuntu.go:206] setting minikube options for container-runtime
	I1018 15:07:55.804390  379859 config.go:182] Loaded profile config "calico-034446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:07:55.804526  379859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034446
	I1018 15:07:55.822801  379859 main.go:141] libmachine: Using SSH client type: native
	I1018 15:07:55.823078  379859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 15:07:55.823103  379859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 15:07:56.093143  379859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 15:07:56.093170  379859 machine.go:96] duration metric: took 1.324496044s to provisionDockerMachine
	I1018 15:07:56.093182  379859 client.go:171] duration metric: took 7.443015141s to LocalClient.Create
	I1018 15:07:56.093218  379859 start.go:167] duration metric: took 7.443101652s to libmachine.API.Create "calico-034446"
	I1018 15:07:56.093230  379859 start.go:293] postStartSetup for "calico-034446" (driver="docker")
	I1018 15:07:56.093243  379859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 15:07:56.093310  379859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 15:07:56.093357  379859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034446
	I1018 15:07:56.114084  379859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/calico-034446/id_rsa Username:docker}
	I1018 15:07:56.223039  379859 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 15:07:56.226993  379859 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 15:07:56.227026  379859 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 15:07:56.227039  379859 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/addons for local assets ...
	I1018 15:07:56.227093  379859 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-89690/.minikube/files for local assets ...
	I1018 15:07:56.227205  379859 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem -> 931872.pem in /etc/ssl/certs
	I1018 15:07:56.227327  379859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 15:07:56.235591  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:07:56.257248  379859 start.go:296] duration metric: took 164.000949ms for postStartSetup
	I1018 15:07:56.257645  379859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-034446
	I1018 15:07:56.279134  379859 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/config.json ...
	I1018 15:07:56.279385  379859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 15:07:56.279427  379859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034446
	I1018 15:07:56.298284  379859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/calico-034446/id_rsa Username:docker}
	I1018 15:07:56.394271  379859 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 15:07:56.400323  379859 start.go:128] duration metric: took 7.752813635s to createHost
	I1018 15:07:56.400351  379859 start.go:83] releasing machines lock for "calico-034446", held for 7.753137914s
	I1018 15:07:56.400418  379859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-034446
	I1018 15:07:56.420584  379859 ssh_runner.go:195] Run: cat /version.json
	I1018 15:07:56.420629  379859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034446
	I1018 15:07:56.420762  379859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 15:07:56.420850  379859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034446
	I1018 15:07:56.441390  379859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/calico-034446/id_rsa Username:docker}
	I1018 15:07:56.441626  379859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/calico-034446/id_rsa Username:docker}
	I1018 15:07:56.600430  379859 ssh_runner.go:195] Run: systemctl --version
	I1018 15:07:56.608248  379859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 15:07:56.653275  379859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 15:07:56.659291  379859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 15:07:56.659366  379859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 15:07:56.695592  379859 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 15:07:56.695621  379859 start.go:495] detecting cgroup driver to use...
	I1018 15:07:56.695656  379859 detect.go:190] detected "systemd" cgroup driver on host os
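detect.go reports a "systemd" cgroup driver here, but its exact heuristic is not visible in the log. One common approach, sketched below purely as an assumption: a unified cgroup v2 hierarchy exposes /sys/fs/cgroup/cgroup.controllers, and a host booted under systemd exposes /run/systemd/system, both of which point at the systemd driver:

	package main

	import (
		"fmt"
		"os"
	)

	// detectCgroupDriver is an illustrative heuristic, not detect.go's
	// confirmed logic.
	func detectCgroupDriver() string {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			return "systemd" // cgroup v2 unified hierarchy
		}
		if _, err := os.Stat("/run/systemd/system"); err == nil {
			return "systemd" // systemd is init even on cgroup v1
		}
		return "cgroupfs"
	}

	func main() {
		fmt.Printf("detected %q cgroup driver on host os\n", detectCgroupDriver())
	}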
	I1018 15:07:56.695700  379859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 15:07:56.726165  379859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 15:07:56.740648  379859 docker.go:218] disabling cri-docker service (if available) ...
	I1018 15:07:56.740720  379859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 15:07:56.760873  379859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 15:07:56.782520  379859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 15:07:56.872815  379859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 15:07:56.984159  379859 docker.go:234] disabling docker service ...
	I1018 15:07:56.984233  379859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 15:07:57.007713  379859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 15:07:57.023546  379859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 15:07:57.123138  379859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 15:07:57.216260  379859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 15:07:57.231284  379859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 15:07:57.246969  379859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 15:07:57.247030  379859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:57.258501  379859 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 15:07:57.258569  379859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:57.268565  379859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:57.279351  379859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:57.289488  379859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 15:07:57.298689  379859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:57.310119  379859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:57.325844  379859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 15:07:57.337802  379859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 15:07:57.347693  379859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 15:07:57.356833  379859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:07:57.453862  379859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 15:07:59.232969  379859 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.779065537s)
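The crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf with a chain of sed one-liners (pause image, cgroup manager, conmon cgroup, unprivileged port sysctl) and then restart cri-o. A Go sketch of the first of those edits, equivalent to the `sed -i 's|^.*pause_image = .*$|...|'` command; the path and image come from the log, and running it for real requires root:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setPauseImage replaces any pause_image line in the drop-in, matching
	// the sed command's ^.*pause_image = .*$ semantics; it is idempotent.
	func setPauseImage(path, image string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1")
		if err != nil {
			panic(err)
		}
	}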
	I1018 15:07:59.233013  379859 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 15:07:59.233074  379859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 15:07:59.237705  379859 start.go:563] Will wait 60s for crictl version
	I1018 15:07:59.237771  379859 ssh_runner.go:195] Run: which crictl
	I1018 15:07:59.241764  379859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 15:07:59.268614  379859 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 15:07:59.268693  379859 ssh_runner.go:195] Run: crio --version
	I1018 15:07:59.299214  379859 ssh_runner.go:195] Run: crio --version
	I1018 15:07:59.340829  379859 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 15:07:56.183877  383339 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 15:07:56.184143  383339 start.go:159] libmachine.API.Create for "custom-flannel-034446" (driver="docker")
	I1018 15:07:56.184177  383339 client.go:168] LocalClient.Create starting
	I1018 15:07:56.184246  383339 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem
	I1018 15:07:56.184296  383339 main.go:141] libmachine: Decoding PEM data...
	I1018 15:07:56.184318  383339 main.go:141] libmachine: Parsing certificate...
	I1018 15:07:56.184387  383339 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem
	I1018 15:07:56.184414  383339 main.go:141] libmachine: Decoding PEM data...
	I1018 15:07:56.184432  383339 main.go:141] libmachine: Parsing certificate...
	I1018 15:07:56.184828  383339 cli_runner.go:164] Run: docker network inspect custom-flannel-034446 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 15:07:56.203026  383339 cli_runner.go:211] docker network inspect custom-flannel-034446 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 15:07:56.203119  383339 network_create.go:284] running [docker network inspect custom-flannel-034446] to gather additional debugging logs...
	I1018 15:07:56.203149  383339 cli_runner.go:164] Run: docker network inspect custom-flannel-034446
	W1018 15:07:56.221246  383339 cli_runner.go:211] docker network inspect custom-flannel-034446 returned with exit code 1
	I1018 15:07:56.221276  383339 network_create.go:287] error running [docker network inspect custom-flannel-034446]: docker network inspect custom-flannel-034446: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-034446 not found
	I1018 15:07:56.221308  383339 network_create.go:289] output of [docker network inspect custom-flannel-034446]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-034446 not found
	
	** /stderr **
	I1018 15:07:56.221445  383339 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:07:56.241025  383339 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-67ded9675d49 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:eb:89:76:0f:a6} reservation:<nil>}
	I1018 15:07:56.241905  383339 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4b365c92bc46 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:52:db:b6:83:36:75} reservation:<nil>}
	I1018 15:07:56.242823  383339 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ab6063c7cdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:eb:32:cc:ab:b4} reservation:<nil>}
	I1018 15:07:56.243599  383339 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4924a2f99658 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8e:94:f2:fb:d4:4c} reservation:<nil>}
	I1018 15:07:56.244472  383339 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002045a70}
	I1018 15:07:56.244503  383339 network_create.go:124] attempt to create docker network custom-flannel-034446 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 15:07:56.244561  383339 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-034446 custom-flannel-034446
	I1018 15:07:56.310166  383339 network_create.go:108] docker network custom-flannel-034446 192.168.85.0/24 created
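The network.go lines above walk candidate /24 subnets with the third octet stepping by 9 (49, 58, 67, 76, ...) and take the first one with no existing bridge. A simplified sketch of that scan; checking local interface addresses stands in for minikube's fuller reservation logic:

	package main

	import (
		"fmt"
		"net"
	)

	// freeSubnet returns the first 192.168.x.0/24 (x = 49, 58, 67, ...)
	// that no local interface address falls inside, mirroring the
	// "skipping subnet ... that is taken" progression in the log.
	func freeSubnet() (*net.IPNet, error) {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return nil, err
		}
		for octet := 49; octet <= 255; octet += 9 {
			_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", octet)) // well-formed by construction
			taken := false
			for _, a := range addrs {
				if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
					taken = true
					break
				}
			}
			if !taken {
				return candidate, nil
			}
		}
		return nil, fmt.Errorf("no free private subnet found")
	}

	func main() {
		subnet, err := freeSubnet()
		if err != nil {
			panic(err)
		}
		fmt.Println("using free private subnet", subnet)
	}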
	I1018 15:07:56.310203  383339 kic.go:121] calculated static IP "192.168.85.2" for the "custom-flannel-034446" container
	I1018 15:07:56.310274  383339 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 15:07:56.329239  383339 cli_runner.go:164] Run: docker volume create custom-flannel-034446 --label name.minikube.sigs.k8s.io=custom-flannel-034446 --label created_by.minikube.sigs.k8s.io=true
	I1018 15:07:56.347356  383339 oci.go:103] Successfully created a docker volume custom-flannel-034446
	I1018 15:07:56.347465  383339 cli_runner.go:164] Run: docker run --rm --name custom-flannel-034446-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-034446 --entrypoint /usr/bin/test -v custom-flannel-034446:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 15:07:56.765584  383339 oci.go:107] Successfully prepared a docker volume custom-flannel-034446
	I1018 15:07:56.765631  383339 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:07:56.765649  383339 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 15:07:56.765716  383339 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-034446:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 15:07:59.757130  383339 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-034446:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (2.991358585s)
	I1018 15:07:59.757160  383339 kic.go:203] duration metric: took 2.991506775s to extract preloaded images to volume ...
	W1018 15:07:59.757252  383339 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 15:07:59.757285  383339 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 15:07:59.757324  383339 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 15:07:59.833225  383339 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-034446 --name custom-flannel-034446 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-034446 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-034446 --network custom-flannel-034446 --ip 192.168.85.2 --volume custom-flannel-034446:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 15:08:00.131308  383339 cli_runner.go:164] Run: docker container inspect custom-flannel-034446 --format={{.State.Running}}
	I1018 15:08:00.153831  383339 cli_runner.go:164] Run: docker container inspect custom-flannel-034446 --format={{.State.Status}}
	I1018 15:08:00.177055  383339 cli_runner.go:164] Run: docker exec custom-flannel-034446 stat /var/lib/dpkg/alternatives/iptables
	I1018 15:08:00.235822  383339 oci.go:144] the created container "custom-flannel-034446" has a running status.
	I1018 15:08:00.235870  383339 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-89690/.minikube/machines/custom-flannel-034446/id_rsa...
	I1018 15:08:00.437054  383339 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-89690/.minikube/machines/custom-flannel-034446/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 15:08:00.475812  383339 cli_runner.go:164] Run: docker container inspect custom-flannel-034446 --format={{.State.Status}}
	I1018 15:08:00.501471  383339 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 15:08:00.501498  383339 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-034446 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 15:08:00.555316  383339 cli_runner.go:164] Run: docker container inspect custom-flannel-034446 --format={{.State.Status}}
	I1018 15:08:00.580408  383339 machine.go:93] provisionDockerMachine start ...
	I1018 15:08:00.580527  383339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034446
	I1018 15:08:00.603969  383339 main.go:141] libmachine: Using SSH client type: native
	I1018 15:08:00.604354  383339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1018 15:08:00.604385  383339 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 15:08:00.751927  383339 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-034446
	
	I1018 15:08:00.751957  383339 ubuntu.go:182] provisioning hostname "custom-flannel-034446"
	I1018 15:08:00.752020  383339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034446
	I1018 15:08:00.774320  383339 main.go:141] libmachine: Using SSH client type: native
	I1018 15:08:00.774539  383339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1018 15:08:00.774551  383339 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-034446 && echo "custom-flannel-034446" | sudo tee /etc/hostname
	I1018 15:08:00.929798  383339 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-034446
	
	I1018 15:08:00.929890  383339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034446
	I1018 15:08:00.952148  383339 main.go:141] libmachine: Using SSH client type: native
	I1018 15:08:00.952473  383339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1018 15:08:00.952504  383339 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-034446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-034446/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-034446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 15:07:59.381975  379859 cli_runner.go:164] Run: docker network inspect calico-034446 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 15:07:59.399900  379859 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 15:07:59.404521  379859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 15:07:59.444084  379859 kubeadm.go:883] updating cluster {Name:calico-034446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-034446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 15:07:59.444257  379859 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 15:07:59.444313  379859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:07:59.476062  379859 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:07:59.476085  379859 crio.go:433] Images already preloaded, skipping extraction
	I1018 15:07:59.476140  379859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 15:07:59.504254  379859 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 15:07:59.504278  379859 cache_images.go:85] Images are preloaded, skipping loading
	I1018 15:07:59.504287  379859 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 15:07:59.504389  379859 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-034446 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-034446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1018 15:07:59.504468  379859 ssh_runner.go:195] Run: crio config
	I1018 15:07:59.572599  379859 cni.go:84] Creating CNI manager for "calico"
	I1018 15:07:59.572635  379859 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 15:07:59.572668  379859 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-034446 NodeName:calico-034446 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 15:07:59.572800  379859 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-034446"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
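	
The kubeadm.yaml dumped above concatenates four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". As a minimal stdlib-only Go sketch, not minikube's actual code, one can sanity-check that a generated file splits into those documents; the relative path here is a hypothetical stand-in for /var/tmp/minikube/kubeadm.yaml on the node:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	// Hypothetical local copy of the generated kubeadm config.
	data, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	kindRe := regexp.MustCompile(`(?m)^kind: (\S+)`)
	// Each "---" on its own line separates one kubeadm API document.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			fmt.Printf("document %d: %s\n", i+1, m[1])
		}
	}
}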
	
	I1018 15:07:59.572865  379859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 15:07:59.582318  379859 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 15:07:59.582397  379859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 15:07:59.590886  379859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 15:07:59.655973  379859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 15:07:59.739948  379859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1018 15:07:59.754895  379859 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 15:07:59.759750  379859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
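	
The bash one-liner above makes the control-plane.minikube.internal hosts entry idempotent: drop any existing line for that name, append the fresh mapping, copy the result back. A minimal Go sketch of the same update, assuming a hypothetical local copy of the file (rewriting the real /etc/hosts needs root, hence the sudo cp above):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.76.2\tcontrol-plane.minikube.internal"
	// Hypothetical copy of /etc/hosts.
	data, err := os.ReadFile("hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale control-plane entry, whatever IP it mapped to.
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}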
	I1018 15:07:59.771907  379859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 15:07:59.888340  379859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 15:07:59.912937  379859 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446 for IP: 192.168.76.2
	I1018 15:07:59.912960  379859 certs.go:195] generating shared ca certs ...
	I1018 15:07:59.912980  379859 certs.go:227] acquiring lock for ca certs: {Name:mk6d3b4e44856f498157352ffe8bb89d2c7a3998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:59.913253  379859 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key
	I1018 15:07:59.913323  379859 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key
	I1018 15:07:59.913340  379859 certs.go:257] generating profile certs ...
	I1018 15:07:59.913410  379859 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/client.key
	I1018 15:07:59.913432  379859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/client.crt with IP's: []
	I1018 15:07:59.992174  379859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/client.crt ...
	I1018 15:07:59.992204  379859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/client.crt: {Name:mkb5b8eb4a84517665f50ce051ed2837e91da765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:59.992390  379859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/client.key ...
	I1018 15:07:59.992407  379859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/client.key: {Name:mk9c26108233923b4c58f5088f0ed2e9812425d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:07:59.992526  379859 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/apiserver.key.d2b7031d
	I1018 15:07:59.992551  379859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/apiserver.crt.d2b7031d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 15:08:00.114261  379859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/apiserver.crt.d2b7031d ...
	I1018 15:08:00.114298  379859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/apiserver.crt.d2b7031d: {Name:mk166078d6798d65497a687a6f0252d14a7195a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:08:00.114512  379859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/apiserver.key.d2b7031d ...
	I1018 15:08:00.114538  379859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/apiserver.key.d2b7031d: {Name:mk97a6fe7f99e9c14ad544b8bfe9d1c120194f3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:08:00.114647  379859 certs.go:382] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/apiserver.crt.d2b7031d -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/apiserver.crt
	I1018 15:08:00.114737  379859 certs.go:386] copying /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/apiserver.key.d2b7031d -> /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/apiserver.key
	I1018 15:08:00.114804  379859 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/proxy-client.key
	I1018 15:08:00.114824  379859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/proxy-client.crt with IP's: []
	I1018 15:08:00.738747  379859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/proxy-client.crt ...
	I1018 15:08:00.738777  379859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/proxy-client.crt: {Name:mk339a877b9b7d4e3da29c4a398d7efa8b0abde6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 15:08:00.738997  379859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/proxy-client.key ...
	I1018 15:08:00.739015  379859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/proxy-client.key: {Name:mkfc835fb63cd3f715eed760075030e05f0f74e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
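	
The certs.go/crypto.go steps above issue each profile cert against the shared minikubeCA. A self-contained sketch in the same spirit using Go's standard crypto/x509; the names, key sizes, and lifetimes here are illustrative assumptions, not minikube's actual values (only the 26280h expiry mirrors the CertExpiration shown in the config dump):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Illustrative CA key pair; minikube loads its persisted ca.key instead.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Client cert analogous to profiles/<name>/client.crt, signed by that CA.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // 3 years, per CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("client.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644); err != nil {
		panic(err)
	}
}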
	I1018 15:08:00.739287  379859 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem (1338 bytes)
	W1018 15:08:00.739339  379859 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187_empty.pem, impossibly tiny 0 bytes
	I1018 15:08:00.739353  379859 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 15:08:00.739383  379859 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/ca.pem (1082 bytes)
	I1018 15:08:00.739414  379859 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/cert.pem (1123 bytes)
	I1018 15:08:00.739447  379859 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/certs/key.pem (1675 bytes)
	I1018 15:08:00.739500  379859 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem (1708 bytes)
	I1018 15:08:00.740287  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 15:08:00.762829  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 15:08:00.784626  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 15:08:00.806213  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 15:08:00.827578  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 15:08:00.848895  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 15:08:00.870848  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 15:08:00.890304  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/calico-034446/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 15:08:00.912293  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 15:08:00.934118  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/certs/93187.pem --> /usr/share/ca-certificates/93187.pem (1338 bytes)
	I1018 15:08:00.956635  379859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/ssl/certs/931872.pem --> /usr/share/ca-certificates/931872.pem (1708 bytes)
	I1018 15:08:00.977464  379859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 15:08:00.992800  379859 ssh_runner.go:195] Run: openssl version
	I1018 15:08:01.001420  379859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/931872.pem && ln -fs /usr/share/ca-certificates/931872.pem /etc/ssl/certs/931872.pem"
	I1018 15:08:01.012362  379859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/931872.pem
	I1018 15:08:01.017081  379859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 14:26 /usr/share/ca-certificates/931872.pem
	I1018 15:08:01.017156  379859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/931872.pem
	I1018 15:08:01.055275  379859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/931872.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 15:08:01.065687  379859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 15:08:01.075374  379859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:08:01.079344  379859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 14:15 /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:08:01.079422  379859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 15:08:01.123359  379859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 15:08:01.132629  379859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93187.pem && ln -fs /usr/share/ca-certificates/93187.pem /etc/ssl/certs/93187.pem"
	I1018 15:08:01.141972  379859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93187.pem
	I1018 15:08:01.146366  379859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 14:26 /usr/share/ca-certificates/93187.pem
	I1018 15:08:01.146420  379859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93187.pem
	I1018 15:08:01.185378  379859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93187.pem /etc/ssl/certs/51391683.0"
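	
The three test/ln blocks above reproduce the update-ca-certificates convention by hand: each PEM in /usr/share/ca-certificates gets an /etc/ssl/certs symlink named after its OpenSSL subject hash (b5213941.0 for minikubeCA.pem above). A small Go sketch of one such step, shelling out to openssl as the log does; the cert path is a hypothetical example and creating the symlink needs root:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Hypothetical cert path; mirrors the "test -L || ln -fs" pattern above.
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err) // needs root, like the sudo'd commands above
		}
	}
}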
	I1018 15:08:01.196374  379859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 15:08:01.200809  379859 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 15:08:01.200875  379859 kubeadm.go:400] StartCluster: {Name:calico-034446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-034446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 15:08:01.200998  379859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 15:08:01.201059  379859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 15:08:01.231735  379859 cri.go:89] found id: ""
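	
cri.go's listing above is a thin wrapper over crictl. A minimal Go sketch of the same query via os/exec, assuming sudo access and a configured CRI endpoint; an empty result, as logged here, means no kube-system containers exist yet on the fresh node:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query as the log line above: IDs of all kube-system pod containers.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d container(s)\n", len(ids)) // zero on this fresh node
}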
	I1018 15:08:01.231800  379859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 15:08:01.240504  379859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 15:08:01.249730  379859 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 15:08:01.249776  379859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 15:08:01.258952  379859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 15:08:01.258976  379859 kubeadm.go:157] found existing configuration files:
	
	I1018 15:08:01.259057  379859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 15:08:01.266983  379859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 15:08:01.267042  379859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 15:08:01.274808  379859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 15:08:01.283325  379859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 15:08:01.283379  379859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 15:08:01.291302  379859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 15:08:01.299359  379859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 15:08:01.299423  379859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 15:08:01.307393  379859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 15:08:01.316040  379859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 15:08:01.316093  379859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 15:08:01.323780  379859 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 15:08:01.367638  379859 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 15:08:01.367743  379859 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 15:08:01.402007  379859 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 15:08:01.402090  379859 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 15:08:01.402144  379859 kubeadm.go:318] OS: Linux
	I1018 15:08:01.402203  379859 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 15:08:01.402274  379859 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 15:08:01.402337  379859 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 15:08:01.402437  379859 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 15:08:01.402534  379859 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 15:08:01.402603  379859 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 15:08:01.402852  379859 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 15:08:01.402939  379859 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 15:08:01.482944  379859 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 15:08:01.483078  379859 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 15:08:01.483204  379859 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 15:08:01.495061  379859 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Oct 18 15:07:32 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:32.751418566Z" level=info msg="Started container" PID=1733 containerID=046b564824188700f697eac314f7834e14846486259c62eef45f471e6c86c38e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb/dashboard-metrics-scraper id=d6ecceaf-77db-4c99-96fc-eea78c1e3630 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4fa1982da655dc27d60c5857fc0ffa49e1d5550f8f7d54eaec61de66f148f2ac
	Oct 18 15:07:32 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:32.815078733Z" level=info msg="Removing container: ee03aabf5315fee089700a9ccdda4fd506ff394bf4bc8ba057c76b45c6314c87" id=165b1552-4ab1-4f65-a9be-1bf2f0901340 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:07:32 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:32.827853374Z" level=info msg="Removed container ee03aabf5315fee089700a9ccdda4fd506ff394bf4bc8ba057c76b45c6314c87: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb/dashboard-metrics-scraper" id=165b1552-4ab1-4f65-a9be-1bf2f0901340 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.83868993Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=eaf4b023-1405-4e58-961c-05db14fa0d87 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.839838128Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b9ab206f-f837-454f-9b97-fa86687eebe6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.840865193Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=046eaaa6-cee0-4424-b428-11e381f2e88f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.84258865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.849163468Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.849391003Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5d2600f37e4862493c2e0a2d71ee544934cfb8f12d051e1a930e68719135bacb/merged/etc/passwd: no such file or directory"
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.849430037Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5d2600f37e4862493c2e0a2d71ee544934cfb8f12d051e1a930e68719135bacb/merged/etc/group: no such file or directory"
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.84976059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.88489648Z" level=info msg="Created container 40928802aedeab3c2a9b90afd2b1acb3c9667f75da95b3ef38a0964e568b3ffd: kube-system/storage-provisioner/storage-provisioner" id=046eaaa6-cee0-4424-b428-11e381f2e88f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.88563625Z" level=info msg="Starting container: 40928802aedeab3c2a9b90afd2b1acb3c9667f75da95b3ef38a0964e568b3ffd" id=60562457-b9f3-444e-9f2c-549682adb683 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:07:41 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:41.88802898Z" level=info msg="Started container" PID=1747 containerID=40928802aedeab3c2a9b90afd2b1acb3c9667f75da95b3ef38a0964e568b3ffd description=kube-system/storage-provisioner/storage-provisioner id=60562457-b9f3-444e-9f2c-549682adb683 name=/runtime.v1.RuntimeService/StartContainer sandboxID=41fbbd2ac92d2c9847e389df194a24c9d07fa7416de48c83075c678b3da65310
	Oct 18 15:07:53 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:53.697667789Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fc2f135d-14d5-4363-b70f-f7c6c63ebeff name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:53 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:53.761658332Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e945e9a5-0040-4945-8892-89e7bccac421 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 15:07:53 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:53.767362495Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb/dashboard-metrics-scraper" id=47afbe6c-ff43-4cdc-9dee-0563a7743bc1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:53 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:53.767693081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:53 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:53.857077965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:53 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:53.857817623Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 15:07:53 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:53.916719714Z" level=info msg="Created container 27c889175cb65cf3399ec27dae37172afc92f1f6b367c006c3c16f8a1c539efb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb/dashboard-metrics-scraper" id=47afbe6c-ff43-4cdc-9dee-0563a7743bc1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 15:07:53 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:53.917841692Z" level=info msg="Starting container: 27c889175cb65cf3399ec27dae37172afc92f1f6b367c006c3c16f8a1c539efb" id=f890b3a5-999c-4939-bfac-4a601b2f90cf name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 15:07:53 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:53.921366648Z" level=info msg="Started container" PID=1781 containerID=27c889175cb65cf3399ec27dae37172afc92f1f6b367c006c3c16f8a1c539efb description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb/dashboard-metrics-scraper id=f890b3a5-999c-4939-bfac-4a601b2f90cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=4fa1982da655dc27d60c5857fc0ffa49e1d5550f8f7d54eaec61de66f148f2ac
	Oct 18 15:07:54 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:54.87956159Z" level=info msg="Removing container: 046b564824188700f697eac314f7834e14846486259c62eef45f471e6c86c38e" id=baf15801-7063-4731-ac57-a66b1244330d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 15:07:54 default-k8s-diff-port-489104 crio[560]: time="2025-10-18T15:07:54.89252571Z" level=info msg="Removed container 046b564824188700f697eac314f7834e14846486259c62eef45f471e6c86c38e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb/dashboard-metrics-scraper" id=baf15801-7063-4731-ac57-a66b1244330d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	27c889175cb65       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   4fa1982da655d       dashboard-metrics-scraper-6ffb444bf9-9nlsb             kubernetes-dashboard
	40928802aedea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   41fbbd2ac92d2       storage-provisioner                                    kube-system
	e38e8320d8865       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   a64e01e6999b3       kubernetes-dashboard-855c9754f9-7nj88                  kubernetes-dashboard
	0610acd94046a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   e0b901cc3983b       busybox                                                default
	e88a1356120d2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   fbb83c221ad56       coredns-66bc5c9577-dtjgd                               kube-system
	4159ca4d468f4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   bf0ce8a1e3b2f       kindnet-nvnw6                                          kube-system
	b87ad405fde62       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  0                   d21a4793b7aa2       kube-proxy-7wbfs                                       kube-system
	09fa1b647fa4f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   41fbbd2ac92d2       storage-provisioner                                    kube-system
	7fb5589151f3a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   98de4f1e6a15a       etcd-default-k8s-diff-port-489104                      kube-system
	ce9720cd32591       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   41ae85fcc320c       kube-scheduler-default-k8s-diff-port-489104            kube-system
	1e308c368e373       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   ec03206507acf       kube-apiserver-default-k8s-diff-port-489104            kube-system
	2358e366cd975       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   68dc57583544a       kube-controller-manager-default-k8s-diff-port-489104   kube-system
	
	
	==> coredns [e88a1356120d204fdc4589ff5088ddfaf6f22f9da2c191956e946643ae7c3ae2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49606 - 4796 "HINFO IN 3785732568142779832.626035667130173452. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.08647486s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-489104
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-489104
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=default-k8s-diff-port-489104
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T15_06_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 15:06:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-489104
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 15:07:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 15:07:40 +0000   Sat, 18 Oct 2025 15:06:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 15:07:40 +0000   Sat, 18 Oct 2025 15:06:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 15:07:40 +0000   Sat, 18 Oct 2025 15:06:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 15:07:40 +0000   Sat, 18 Oct 2025 15:06:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-489104
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                2a8259a7-7ba4-40c3-bcf3-f004f9ae6965
	  Boot ID:                    58333a2c-0d6f-4a29-a163-c3e980bea12f
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-dtjgd                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-default-k8s-diff-port-489104                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-nvnw6                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-default-k8s-diff-port-489104             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-489104    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-7wbfs                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-default-k8s-diff-port-489104             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-9nlsb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7nj88                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node default-k8s-diff-port-489104 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node default-k8s-diff-port-489104 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node default-k8s-diff-port-489104 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node default-k8s-diff-port-489104 event: Registered Node default-k8s-diff-port-489104 in Controller
	  Normal  NodeReady                96s                kubelet          Node default-k8s-diff-port-489104 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-489104 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-489104 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-489104 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node default-k8s-diff-port-489104 event: Registered Node default-k8s-diff-port-489104 in Controller
	
	
	==> dmesg <==
	[Oct18 14:18] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.019741] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023884] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +1.023909] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +2.047782] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +4.031551] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[  +8.127223] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[ +16.383384] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	[Oct18 14:19] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: aa 39 5f 6f c4 53 62 16 b9 3d 57 4f 08 00
	
	
	==> etcd [7fb5589151f3a78025f09ce1b546891fb02d25162971c57434743de3e24cbe9f] <==
	{"level":"warn","ts":"2025-10-18T15:07:09.672110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.679603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.686908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.694953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.703841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.711163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.720669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.728424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.735450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.743043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.750578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.758148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.765575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.774178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.782458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.789275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.796871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.805419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.815049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.825847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.842353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T15:07:09.905475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42184","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T15:07:18.890000Z","caller":"traceutil/trace.go:172","msg":"trace[1654892729] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"118.092431ms","start":"2025-10-18T15:07:18.771883Z","end":"2025-10-18T15:07:18.889976Z","steps":["trace[1654892729] 'process raft request'  (duration: 89.614533ms)","trace[1654892729] 'compare'  (duration: 28.347448ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T15:07:19.870979Z","caller":"traceutil/trace.go:172","msg":"trace[1088078680] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"103.83802ms","start":"2025-10-18T15:07:19.767122Z","end":"2025-10-18T15:07:19.870960Z","steps":["trace[1088078680] 'process raft request'  (duration: 103.683031ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T15:07:20.002036Z","caller":"traceutil/trace.go:172","msg":"trace[1900141265] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"231.268397ms","start":"2025-10-18T15:07:19.770741Z","end":"2025-10-18T15:07:20.002009Z","steps":["trace[1900141265] 'process raft request'  (duration: 225.407062ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:08:03 up  2:50,  0 user,  load average: 5.40, 3.73, 2.36
	Linux default-k8s-diff-port-489104 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4159ca4d468f49826a85e981582c651a18b8a39a2bddebeb07af010243f0d04f] <==
	I1018 15:07:11.310507       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 15:07:11.310991       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 15:07:11.311222       1 main.go:148] setting mtu 1500 for CNI 
	I1018 15:07:11.311287       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 15:07:11.311339       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T15:07:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 15:07:11.607896       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 15:07:11.608031       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 15:07:11.608061       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 15:07:11.608449       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 15:07:12.008547       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 15:07:12.008578       1 metrics.go:72] Registering metrics
	I1018 15:07:12.008655       1 controller.go:711] "Syncing nftables rules"
	I1018 15:07:21.608297       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:07:21.608343       1 main.go:301] handling current node
	I1018 15:07:31.610044       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:07:31.610092       1 main.go:301] handling current node
	I1018 15:07:41.607898       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:07:41.608032       1 main.go:301] handling current node
	I1018 15:07:51.614035       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:07:51.614076       1 main.go:301] handling current node
	I1018 15:08:01.614022       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 15:08:01.614065       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1e308c368e373ccff9c4504f9e6503c09e4a7d1e0200e60472eaf38378135b96] <==
	I1018 15:07:10.431471       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 15:07:10.431530       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 15:07:10.431826       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 15:07:10.431883       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 15:07:10.431988       1 aggregator.go:171] initial CRD sync complete...
	I1018 15:07:10.432012       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 15:07:10.432019       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 15:07:10.432025       1 cache.go:39] Caches are synced for autoregister controller
	I1018 15:07:10.432175       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1018 15:07:10.440567       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 15:07:10.450092       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 15:07:10.453415       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 15:07:10.455346       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 15:07:10.752632       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 15:07:10.770073       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 15:07:10.802772       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 15:07:10.827750       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 15:07:10.838666       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 15:07:10.912328       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.68.150"}
	I1018 15:07:10.925596       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.156.180"}
	I1018 15:07:11.328056       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 15:07:14.168578       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 15:07:14.218582       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 15:07:14.269006       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [2358e366cd9757f3067562185021f0051cae924e07f221015b53a392bf5f90b2] <==
	I1018 15:07:13.731829       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 15:07:13.736242       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 15:07:13.761557       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 15:07:13.764870       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 15:07:13.764926       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 15:07:13.764945       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 15:07:13.764995       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 15:07:13.765005       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 15:07:13.765015       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 15:07:13.765023       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 15:07:13.766309       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 15:07:13.768143       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 15:07:13.770089       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 15:07:13.771067       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 15:07:13.771107       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 15:07:13.773667       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 15:07:13.773795       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 15:07:13.773876       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 15:07:13.773908       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-489104"
	I1018 15:07:13.773978       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 15:07:13.774747       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 15:07:13.777043       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 15:07:13.778971       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 15:07:13.781140       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 15:07:13.792510       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b87ad405fde623f73cb099b26bcf0ab11f50050837725bf409c775dfa67cde02] <==
	I1018 15:07:11.118130       1 server_linux.go:53] "Using iptables proxy"
	I1018 15:07:11.175756       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 15:07:11.276391       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 15:07:11.276446       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1018 15:07:11.276552       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 15:07:11.296480       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 15:07:11.296539       1 server_linux.go:132] "Using iptables Proxier"
	I1018 15:07:11.302237       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 15:07:11.303190       1 server.go:527] "Version info" version="v1.34.1"
	I1018 15:07:11.303274       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:07:11.305683       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 15:07:11.305705       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 15:07:11.305748       1 config.go:200] "Starting service config controller"
	I1018 15:07:11.305753       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 15:07:11.305769       1 config.go:106] "Starting endpoint slice config controller"
	I1018 15:07:11.305774       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 15:07:11.305981       1 config.go:309] "Starting node config controller"
	I1018 15:07:11.305995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 15:07:11.306003       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 15:07:11.406483       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 15:07:11.406503       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 15:07:11.406487       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ce9720cd32591b6942daf642e28e8696920a5b3fcb4f8eddcd689c9ef3054c1e] <==
	I1018 15:07:09.312976       1 serving.go:386] Generated self-signed cert in-memory
	W1018 15:07:10.356211       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 15:07:10.356344       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 15:07:10.356362       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 15:07:10.356371       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 15:07:10.406249       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 15:07:10.406289       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 15:07:10.414973       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:07:10.415022       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 15:07:10.417026       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 15:07:10.417130       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 15:07:10.515574       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 15:07:14 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:14.395840     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1b6c4748-2e70-49d6-9351-f74aafc76edc-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-9nlsb\" (UID: \"1b6c4748-2e70-49d6-9351-f74aafc76edc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb"
	Oct 18 15:07:17 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:17.755966     718 scope.go:117] "RemoveContainer" containerID="dca9d7ac081caa4232673650ff3363664e763b4bcdb566b3879738a394e9aa73"
	Oct 18 15:07:18 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:18.761410     718 scope.go:117] "RemoveContainer" containerID="dca9d7ac081caa4232673650ff3363664e763b4bcdb566b3879738a394e9aa73"
	Oct 18 15:07:18 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:18.761651     718 scope.go:117] "RemoveContainer" containerID="ee03aabf5315fee089700a9ccdda4fd506ff394bf4bc8ba057c76b45c6314c87"
	Oct 18 15:07:18 default-k8s-diff-port-489104 kubelet[718]: E1018 15:07:18.761812     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9nlsb_kubernetes-dashboard(1b6c4748-2e70-49d6-9351-f74aafc76edc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb" podUID="1b6c4748-2e70-49d6-9351-f74aafc76edc"
	Oct 18 15:07:19 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:19.764033     718 scope.go:117] "RemoveContainer" containerID="ee03aabf5315fee089700a9ccdda4fd506ff394bf4bc8ba057c76b45c6314c87"
	Oct 18 15:07:19 default-k8s-diff-port-489104 kubelet[718]: E1018 15:07:19.764199     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9nlsb_kubernetes-dashboard(1b6c4748-2e70-49d6-9351-f74aafc76edc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb" podUID="1b6c4748-2e70-49d6-9351-f74aafc76edc"
	Oct 18 15:07:20 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:20.766789     718 scope.go:117] "RemoveContainer" containerID="ee03aabf5315fee089700a9ccdda4fd506ff394bf4bc8ba057c76b45c6314c87"
	Oct 18 15:07:20 default-k8s-diff-port-489104 kubelet[718]: E1018 15:07:20.767085     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9nlsb_kubernetes-dashboard(1b6c4748-2e70-49d6-9351-f74aafc76edc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb" podUID="1b6c4748-2e70-49d6-9351-f74aafc76edc"
	Oct 18 15:07:25 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:25.705534     718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7nj88" podStartSLOduration=4.749829879 podStartE2EDuration="11.705510409s" podCreationTimestamp="2025-10-18 15:07:14 +0000 UTC" firstStartedPulling="2025-10-18 15:07:14.673127478 +0000 UTC m=+7.094147298" lastFinishedPulling="2025-10-18 15:07:21.628807985 +0000 UTC m=+14.049827828" observedRunningTime="2025-10-18 15:07:21.787991202 +0000 UTC m=+14.209011045" watchObservedRunningTime="2025-10-18 15:07:25.705510409 +0000 UTC m=+18.126530252"
	Oct 18 15:07:32 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:32.697058     718 scope.go:117] "RemoveContainer" containerID="ee03aabf5315fee089700a9ccdda4fd506ff394bf4bc8ba057c76b45c6314c87"
	Oct 18 15:07:32 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:32.811730     718 scope.go:117] "RemoveContainer" containerID="ee03aabf5315fee089700a9ccdda4fd506ff394bf4bc8ba057c76b45c6314c87"
	Oct 18 15:07:32 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:32.811983     718 scope.go:117] "RemoveContainer" containerID="046b564824188700f697eac314f7834e14846486259c62eef45f471e6c86c38e"
	Oct 18 15:07:32 default-k8s-diff-port-489104 kubelet[718]: E1018 15:07:32.812166     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9nlsb_kubernetes-dashboard(1b6c4748-2e70-49d6-9351-f74aafc76edc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb" podUID="1b6c4748-2e70-49d6-9351-f74aafc76edc"
	Oct 18 15:07:40 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:40.291410     718 scope.go:117] "RemoveContainer" containerID="046b564824188700f697eac314f7834e14846486259c62eef45f471e6c86c38e"
	Oct 18 15:07:40 default-k8s-diff-port-489104 kubelet[718]: E1018 15:07:40.291641     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9nlsb_kubernetes-dashboard(1b6c4748-2e70-49d6-9351-f74aafc76edc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb" podUID="1b6c4748-2e70-49d6-9351-f74aafc76edc"
	Oct 18 15:07:41 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:41.838257     718 scope.go:117] "RemoveContainer" containerID="09fa1b647fa4fd7599c9fa5e528e44f54acc68f2fbf3314632c5c794ff039576"
	Oct 18 15:07:53 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:53.697137     718 scope.go:117] "RemoveContainer" containerID="046b564824188700f697eac314f7834e14846486259c62eef45f471e6c86c38e"
	Oct 18 15:07:54 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:54.877241     718 scope.go:117] "RemoveContainer" containerID="046b564824188700f697eac314f7834e14846486259c62eef45f471e6c86c38e"
	Oct 18 15:07:54 default-k8s-diff-port-489104 kubelet[718]: I1018 15:07:54.877514     718 scope.go:117] "RemoveContainer" containerID="27c889175cb65cf3399ec27dae37172afc92f1f6b367c006c3c16f8a1c539efb"
	Oct 18 15:07:54 default-k8s-diff-port-489104 kubelet[718]: E1018 15:07:54.878657     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9nlsb_kubernetes-dashboard(1b6c4748-2e70-49d6-9351-f74aafc76edc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9nlsb" podUID="1b6c4748-2e70-49d6-9351-f74aafc76edc"
	Oct 18 15:07:57 default-k8s-diff-port-489104 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 15:07:57 default-k8s-diff-port-489104 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 15:07:57 default-k8s-diff-port-489104 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 15:07:57 default-k8s-diff-port-489104 systemd[1]: kubelet.service: Consumed 1.699s CPU time.
	
	
	==> kubernetes-dashboard [e38e8320d8865a084f077cc5774404b5b2815ffd7589ad4d5844cfa0edc768c5] <==
	2025/10/18 15:07:21 Using namespace: kubernetes-dashboard
	2025/10/18 15:07:21 Using in-cluster config to connect to apiserver
	2025/10/18 15:07:21 Using secret token for csrf signing
	2025/10/18 15:07:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 15:07:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 15:07:21 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 15:07:21 Generating JWE encryption key
	2025/10/18 15:07:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 15:07:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 15:07:21 Initializing JWE encryption key from synchronized object
	2025/10/18 15:07:21 Creating in-cluster Sidecar client
	2025/10/18 15:07:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 15:07:21 Serving insecurely on HTTP port: 9090
	2025/10/18 15:07:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 15:07:21 Starting overwatch
	
	
	==> storage-provisioner [09fa1b647fa4fd7599c9fa5e528e44f54acc68f2fbf3314632c5c794ff039576] <==
	I1018 15:07:11.082635       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 15:07:41.086373       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [40928802aedeab3c2a9b90afd2b1acb3c9667f75da95b3ef38a0964e568b3ffd] <==
	I1018 15:07:41.902555       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 15:07:41.911748       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 15:07:41.911794       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 15:07:41.914361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:45.370572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:49.631993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:53.231384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:56.285172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:59.308482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:59.335900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 15:07:59.336057       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 15:07:59.336210       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-489104_d7c07fb2-c394-439a-95ba-5f6ceb0d640f!
	I1018 15:07:59.336191       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b77dfb48-26a4-4c5e-9880-c5c307861880", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-489104_d7c07fb2-c394-439a-95ba-5f6ceb0d640f became leader
	W1018 15:07:59.338450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:07:59.341974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 15:07:59.436995       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-489104_d7c07fb2-c394-439a-95ba-5f6ceb0d640f!
	W1018 15:08:01.346103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:08:01.352663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:08:03.357029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 15:08:03.361360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-489104 -n default-k8s-diff-port-489104
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-489104 -n default-k8s-diff-port-489104: exit status 2 (460.69553ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-489104 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.83s)
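Note: the post-mortem above ends with `minikube status --format={{.APIServer}}` exiting 2 even though stdout still reports "Running"; the harness logs this as "may be ok" because minikube status encodes component state in its exit code. Below is a minimal Go sketch of that tolerant probe, assuming the out/minikube-linux-amd64 binary from this run is on the relative path shown; it is an illustration, not the repo's helpers_test.go code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as the post-mortem status check above.
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}",
			"-p", "default-k8s-diff-port-489104",
			"-n", "default-k8s-diff-port-489104")
		out, err := cmd.CombinedOutput()
		fmt.Printf("status output: %s", out)
		// A non-zero exit describes component state rather than a hard
		// failure, so it is reported but not treated as fatal.
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Printf("exit status %d (may be ok)\n", ee.ExitCode())
		}
	}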

                                                
                                    

Test pass (259/326)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.46
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 3.47
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.39
21 TestBinaryMirror 0.8
22 TestOffline 94.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 166.99
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 8.44
48 TestAddons/StoppedEnableDisable 16.8
49 TestCertOptions 26.86
50 TestCertExpiration 212.1
52 TestForceSystemdFlag 23.77
53 TestForceSystemdEnv 26.86
55 TestKVMDriverInstallOrUpdate 1.3
59 TestErrorSpam/setup 24.12
60 TestErrorSpam/start 0.62
61 TestErrorSpam/status 0.91
62 TestErrorSpam/pause 6.41
63 TestErrorSpam/unpause 6.42
64 TestErrorSpam/stop 8.06
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 38.63
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.71
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.56
76 TestFunctional/serial/CacheCmd/cache/add_local 1.6
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.49
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 41.91
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.22
87 TestFunctional/serial/LogsFileCmd 1.24
88 TestFunctional/serial/InvalidService 3.94
90 TestFunctional/parallel/ConfigCmd 0.35
92 TestFunctional/parallel/DryRun 0.36
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 0.91
99 TestFunctional/parallel/AddonsCmd 0.13
102 TestFunctional/parallel/SSHCmd 0.64
103 TestFunctional/parallel/CpCmd 1.69
104 TestFunctional/parallel/MySQL 17.16
105 TestFunctional/parallel/FileSync 0.3
106 TestFunctional/parallel/CertSync 1.73
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
114 TestFunctional/parallel/License 0.41
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 0.48
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
122 TestFunctional/parallel/MountCmd/any-port 73.15
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
126 TestFunctional/parallel/MountCmd/specific-port 1.72
127 TestFunctional/parallel/MountCmd/VerifyCleanup 1.78
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
135 TestFunctional/parallel/ProfileCmd/profile_list 0.38
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.62
142 TestFunctional/parallel/ImageCommands/Setup 1.53
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
150 TestFunctional/parallel/ServiceCmd/List 1.69
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.69
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 150.55
163 TestMultiControlPlane/serial/DeployApp 5.87
164 TestMultiControlPlane/serial/PingHostFromPods 0.97
165 TestMultiControlPlane/serial/AddWorkerNode 24.61
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
168 TestMultiControlPlane/serial/CopyFile 16.42
169 TestMultiControlPlane/serial/StopSecondaryNode 14.25
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
171 TestMultiControlPlane/serial/RestartSecondaryNode 9.03
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 100.72
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.5
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
176 TestMultiControlPlane/serial/StopCluster 41.39
177 TestMultiControlPlane/serial/RestartCluster 51.49
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
179 TestMultiControlPlane/serial/AddSecondaryNode 35.65
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
184 TestJSONOutput/start/Command 39.48
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 7.95
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.21
209 TestKicCustomNetwork/create_custom_network 27.78
210 TestKicCustomNetwork/use_default_bridge_network 24.54
211 TestKicExistingNetwork 25.06
212 TestKicCustomSubnet 27.78
213 TestKicStaticIP 28.06
214 TestMainNoArgs 0.05
215 TestMinikubeProfile 50.46
218 TestMountStart/serial/StartWithMountFirst 5.66
219 TestMountStart/serial/VerifyMountFirst 0.27
220 TestMountStart/serial/StartWithMountSecond 5.56
221 TestMountStart/serial/VerifyMountSecond 0.26
222 TestMountStart/serial/DeleteFirst 1.71
223 TestMountStart/serial/VerifyMountPostDelete 0.27
224 TestMountStart/serial/Stop 1.24
225 TestMountStart/serial/RestartStopped 7.19
226 TestMountStart/serial/VerifyMountPostStop 0.26
229 TestMultiNode/serial/FreshStart2Nodes 90.13
230 TestMultiNode/serial/DeployApp2Nodes 4.8
231 TestMultiNode/serial/PingHostFrom2Pods 0.67
232 TestMultiNode/serial/AddNode 23.92
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.64
235 TestMultiNode/serial/CopyFile 9.39
236 TestMultiNode/serial/StopNode 2.24
237 TestMultiNode/serial/StartAfterStop 7.46
238 TestMultiNode/serial/RestartKeepsNodes 81.81
239 TestMultiNode/serial/DeleteNode 5.2
240 TestMultiNode/serial/StopMultiNode 30.3
241 TestMultiNode/serial/RestartMultiNode 25.81
242 TestMultiNode/serial/ValidateNameConflict 23.12
247 TestPreload 96.49
249 TestScheduledStopUnix 97.64
252 TestInsufficientStorage 9.69
253 TestRunningBinaryUpgrade 69.89
255 TestKubernetesUpgrade 315.9
256 TestMissingContainerUpgrade 79.78
257 TestStoppedBinaryUpgrade/Setup 0.53
258 TestStoppedBinaryUpgrade/Upgrade 61.48
259 TestStoppedBinaryUpgrade/MinikubeLogs 1.07
268 TestPause/serial/Start 69.36
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
271 TestNoKubernetes/serial/StartWithK8s 23.85
272 TestNoKubernetes/serial/StartWithStopK8s 8.38
280 TestNetworkPlugins/group/false 3.35
284 TestNoKubernetes/serial/Start 8.11
285 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
286 TestNoKubernetes/serial/ProfileList 34.05
287 TestPause/serial/SecondStartNoReconfiguration 5.86
289 TestNoKubernetes/serial/Stop 1.32
290 TestNoKubernetes/serial/StartNoArgs 8.95
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
293 TestStartStop/group/old-k8s-version/serial/FirstStart 49.06
294 TestStartStop/group/old-k8s-version/serial/DeployApp 10.29
296 TestStartStop/group/no-preload/serial/FirstStart 51.94
298 TestStartStop/group/old-k8s-version/serial/Stop 17.04
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
300 TestStartStop/group/old-k8s-version/serial/SecondStart 50.51
301 TestStartStop/group/no-preload/serial/DeployApp 8.3
303 TestStartStop/group/no-preload/serial/Stop 16.47
304 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
307 TestStartStop/group/embed-certs/serial/FirstStart 40.55
308 TestStartStop/group/no-preload/serial/SecondStart 51.64
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
313 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.62
314 TestStartStop/group/embed-certs/serial/DeployApp 8.27
316 TestStartStop/group/newest-cni/serial/FirstStart 26.87
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/embed-certs/serial/Stop 16.28
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.3
322 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
326 TestStartStop/group/embed-certs/serial/SecondStart 46.73
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.12
328 TestNetworkPlugins/group/auto/Start 42.29
329 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/Stop 2.89
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
333 TestStartStop/group/newest-cni/serial/SecondStart 12.85
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.31
335 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 46.09
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
340 TestNetworkPlugins/group/kindnet/Start 42.64
341 TestNetworkPlugins/group/auto/KubeletFlags 0.28
342 TestNetworkPlugins/group/auto/NetCatPod 9.2
343 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
344 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
345 TestNetworkPlugins/group/auto/DNS 0.13
346 TestNetworkPlugins/group/auto/Localhost 0.1
347 TestNetworkPlugins/group/auto/HairPin 0.12
348 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
350 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
351 TestNetworkPlugins/group/calico/Start 49.08
352 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
353 TestNetworkPlugins/group/custom-flannel/Start 48.06
354 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
355 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
358 TestNetworkPlugins/group/kindnet/NetCatPod 8.25
359 TestNetworkPlugins/group/enable-default-cni/Start 41.86
360 TestNetworkPlugins/group/kindnet/DNS 0.14
361 TestNetworkPlugins/group/kindnet/Localhost 0.09
362 TestNetworkPlugins/group/kindnet/HairPin 0.09
363 TestNetworkPlugins/group/flannel/Start 51.67
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.28
366 TestNetworkPlugins/group/calico/NetCatPod 10.21
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
369 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
370 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.19
371 TestNetworkPlugins/group/custom-flannel/DNS 0.14
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.09
374 TestNetworkPlugins/group/calico/DNS 0.12
375 TestNetworkPlugins/group/calico/Localhost 0.09
376 TestNetworkPlugins/group/calico/HairPin 0.09
377 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
378 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
379 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
380 TestNetworkPlugins/group/bridge/Start 39.93
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
383 TestNetworkPlugins/group/flannel/NetCatPod 8.17
384 TestNetworkPlugins/group/flannel/DNS 0.12
385 TestNetworkPlugins/group/flannel/Localhost 0.1
386 TestNetworkPlugins/group/flannel/HairPin 0.09
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
388 TestNetworkPlugins/group/bridge/NetCatPod 9.2
389 TestNetworkPlugins/group/bridge/DNS 0.11
390 TestNetworkPlugins/group/bridge/Localhost 0.09
391 TestNetworkPlugins/group/bridge/HairPin 0.09
TestDownloadOnly/v1.28.0/json-events (5.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-498093 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-498093 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.457411999s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.46s)
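Note: the -o=json flag used above turns the start output into a machine-readable event stream on stdout. A sketch of consuming that stream follows, under the assumption that each stdout line is one self-contained JSON object; the "type"/"data" field names are illustrative, not a schema guarantee.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// Flags abridged from the run above.
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
			"--download-only", "-p", "download-only-498093", "--force",
			"--kubernetes-version=v1.28.0", "--container-runtime=crio",
			"--driver=docker")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			var ev map[string]any // decode loosely; exact schema not shown here
			if json.Unmarshal(sc.Bytes(), &ev) == nil {
				fmt.Println(ev["type"], ev["data"])
			}
		}
		_ = cmd.Wait()
	}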

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1018 14:15:04.990385   93187 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1018 14:15:04.990513   93187 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
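Note: the preload.go:198 line above resolves this check to a simple on-disk lookup of the cached tarball. A minimal sketch of the same existence test, with the path copied verbatim from the log line; the helper name is hypothetical.

	package main

	import (
		"fmt"
		"os"
	)

	// preloadExists reports whether the cached preload tarball is on disk.
	func preloadExists(path string) bool {
		_, err := os.Stat(path)
		return err == nil
	}

	func main() {
		const tarball = "/home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
		fmt.Println("preload exists:", preloadExists(tarball))
	}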

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-498093
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-498093: exit status 85 (63.977343ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-498093 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-498093 │ jenkins │ v1.37.0 │ 18 Oct 25 14:14 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 14:14:59
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 14:14:59.575562   93199 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:14:59.575854   93199 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:14:59.575864   93199 out.go:374] Setting ErrFile to fd 2...
	I1018 14:14:59.575871   93199 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:14:59.576098   93199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	W1018 14:14:59.576241   93199 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21409-89690/.minikube/config/config.json: open /home/jenkins/minikube-integration/21409-89690/.minikube/config/config.json: no such file or directory
	I1018 14:14:59.576753   93199 out.go:368] Setting JSON to true
	I1018 14:14:59.577623   93199 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7051,"bootTime":1760789849,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:14:59.577723   93199 start.go:141] virtualization: kvm guest
	I1018 14:14:59.580065   93199 out.go:99] [download-only-498093] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1018 14:14:59.580210   93199 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball: no such file or directory
	I1018 14:14:59.580261   93199 notify.go:220] Checking for updates...
	I1018 14:14:59.581586   93199 out.go:171] MINIKUBE_LOCATION=21409
	I1018 14:14:59.583150   93199 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:14:59.584466   93199 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 14:14:59.585822   93199 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 14:14:59.587084   93199 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1018 14:14:59.589310   93199 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 14:14:59.589560   93199 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:14:59.612973   93199 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 14:14:59.613062   93199 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:15:00.027462   93199 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 14:15:00.017632588 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:15:00.027617   93199 docker.go:318] overlay module found
	I1018 14:15:00.029274   93199 out.go:99] Using the docker driver based on user configuration
	I1018 14:15:00.029322   93199 start.go:305] selected driver: docker
	I1018 14:15:00.029331   93199 start.go:925] validating driver "docker" against <nil>
	I1018 14:15:00.029424   93199 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:15:00.087649   93199 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 14:15:00.078700436 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:15:00.087873   93199 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 14:15:00.088594   93199 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1018 14:15:00.088825   93199 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 14:15:00.090354   93199 out.go:171] Using Docker driver with root privileges
	I1018 14:15:00.091434   93199 cni.go:84] Creating CNI manager for ""
	I1018 14:15:00.091499   93199 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 14:15:00.091519   93199 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 14:15:00.091590   93199 start.go:349] cluster config:
	{Name:download-only-498093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-498093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:15:00.092822   93199 out.go:99] Starting "download-only-498093" primary control-plane node in "download-only-498093" cluster
	I1018 14:15:00.092846   93199 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 14:15:00.094149   93199 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1018 14:15:00.094180   93199 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 14:15:00.094302   93199 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 14:15:00.110247   93199 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 14:15:00.110462   93199 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 14:15:00.110561   93199 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 14:15:00.113144   93199 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1018 14:15:00.113168   93199 cache.go:58] Caching tarball of preloaded images
	I1018 14:15:00.113295   93199 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 14:15:00.114999   93199 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1018 14:15:00.115019   93199 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1018 14:15:00.143855   93199 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1018 14:15:00.143990   93199 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1018 14:15:03.057555   93199 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1018 14:15:03.057956   93199 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/download-only-498093/config.json ...
	I1018 14:15:03.057993   93199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/download-only-498093/config.json: {Name:mk356b168344af4a7c1a5e060db3fade5e4d4e0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 14:15:03.058169   93199 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 14:15:03.058393   93199 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21409-89690/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-498093 host does not exist
	  To start a cluster, run: "minikube start -p download-only-498093"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
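Note: `minikube logs` exits 85 here because the download-only profile never started a host (the stdout above says "The control-plane node download-only-498093 host does not exist"), and the harness tolerates that failure. A sketch of asserting the observed code follows, assuming 85 is the expected exit for a never-started profile as this run shows; it is a hypothetical test, not the repo's aaa_download_only_test.go.

	package main

	import (
		"os/exec"
		"testing"
	)

	func TestLogsExitCode(t *testing.T) {
		// Same command the test runs; a non-zero exit is expected here.
		err := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-498093").Run()
		ee, ok := err.(*exec.ExitError)
		if !ok {
			t.Fatalf("expected a non-zero exit, got %v", err)
		}
		if got := ee.ExitCode(); got != 85 {
			t.Fatalf("expected exit status 85, got %d", got)
		}
	}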

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-498093
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-142592 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-142592 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.465775383s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.47s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1018 14:15:08.869391   93187 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1018 14:15:08.869454   93187 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-89690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-142592
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-142592: exit status 85 (63.228093ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-498093 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-498093 │ jenkins │ v1.37.0 │ 18 Oct 25 14:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │ 18 Oct 25 14:15 UTC │
	│ delete  │ -p download-only-498093                                                                                                                                                   │ download-only-498093 │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │ 18 Oct 25 14:15 UTC │
	│ start   │ -o=json --download-only -p download-only-142592 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-142592 │ jenkins │ v1.37.0 │ 18 Oct 25 14:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 14:15:05
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 14:15:05.446772   93557 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:15:05.447067   93557 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:15:05.447077   93557 out.go:374] Setting ErrFile to fd 2...
	I1018 14:15:05.447082   93557 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:15:05.447268   93557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:15:05.447738   93557 out.go:368] Setting JSON to true
	I1018 14:15:05.448664   93557 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7056,"bootTime":1760789849,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:15:05.448752   93557 start.go:141] virtualization: kvm guest
	I1018 14:15:05.450433   93557 out.go:99] [download-only-142592] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 14:15:05.450618   93557 notify.go:220] Checking for updates...
	I1018 14:15:05.451657   93557 out.go:171] MINIKUBE_LOCATION=21409
	I1018 14:15:05.452752   93557 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:15:05.453996   93557 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 14:15:05.455109   93557 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 14:15:05.456204   93557 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1018 14:15:05.458608   93557 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 14:15:05.458836   93557 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:15:05.480547   93557 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 14:15:05.480652   93557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:15:05.537694   93557 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:54 SystemTime:2025-10-18 14:15:05.528205794 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:15:05.537804   93557 docker.go:318] overlay module found
	I1018 14:15:05.539690   93557 out.go:99] Using the docker driver based on user configuration
	I1018 14:15:05.539722   93557 start.go:305] selected driver: docker
	I1018 14:15:05.539728   93557 start.go:925] validating driver "docker" against <nil>
	I1018 14:15:05.539806   93557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:15:05.597689   93557 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:54 SystemTime:2025-10-18 14:15:05.587098326 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:15:05.597901   93557 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 14:15:05.598436   93557 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1018 14:15:05.598614   93557 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 14:15:05.600583   93557 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-142592 host does not exist
	  To start a cluster, run: "minikube start -p download-only-142592"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-142592
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (0.39s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-735106 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-735106" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-735106
--- PASS: TestDownloadOnlyKic (0.39s)

TestBinaryMirror (0.8s)
=== RUN   TestBinaryMirror
I1018 14:15:09.943589   93187 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-035412 --alsologtostderr --binary-mirror http://127.0.0.1:38181 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-035412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-035412
--- PASS: TestBinaryMirror (0.80s)

TestOffline (94.56s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-800676 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-800676 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m32.070498065s)
helpers_test.go:175: Cleaning up "offline-crio-800676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-800676
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-800676: (2.487184804s)
--- PASS: TestOffline (94.56s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-493618
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-493618: exit status 85 (54.293072ms)

-- stdout --
	* Profile "addons-493618" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-493618"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-493618
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-493618: exit status 85 (53.616233ms)

-- stdout --
	* Profile "addons-493618" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-493618"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (166.99s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-493618 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-493618 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m46.989348207s)
--- PASS: TestAddons/Setup (166.99s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-493618 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-493618 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/serial/GCPAuth/FakeCredentials (8.44s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-493618 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-493618 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7b11849d-f2f9-4652-b676-2eed786a2a6c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7b11849d-f2f9-4652-b676-2eed786a2a6c] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004531955s
addons_test.go:694: (dbg) Run:  kubectl --context addons-493618 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-493618 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-493618 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.44s)
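Both environment probes above reduce to a single exec; a minimal sketch against the same context and pod as this run (printenv accepts multiple names):

	kubectl --context addons-493618 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT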

TestAddons/StoppedEnableDisable (16.8s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-493618
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-493618: (16.533224565s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-493618
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-493618
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-493618
--- PASS: TestAddons/StoppedEnableDisable (16.80s)

TestCertOptions (26.86s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-648086 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1018 15:02:53.418188   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-648086 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.689416318s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-648086 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-648086 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-648086 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-648086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-648086
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-648086: (2.491949906s)
--- PASS: TestCertOptions (26.86s)
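The certificate assertions above can be re-checked by hand against the generated apiserver certificate; a minimal sketch using this run's profile and cert path (the grep filter is an illustrative addition, not part of the test):

	out/minikube-linux-amd64 -p cert-options-648086 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"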

TestCertExpiration (212.1s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-327346 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-327346 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (23.322705356s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-327346 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-327346 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.342419353s)
helpers_test.go:175: Cleaning up "cert-expiration-327346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-327346
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-327346: (2.431447974s)
--- PASS: TestCertExpiration (212.10s)
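To confirm that the second start re-issued certificates with the 8760h lifetime, the notAfter date can be read off the node; a sketch assuming the default cert path shown in TestCertOptions above:

	out/minikube-linux-amd64 -p cert-expiration-327346 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"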

TestForceSystemdFlag (23.77s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-536692 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-536692 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.082762242s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-536692 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-536692" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-536692
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-536692: (2.409763839s)
--- PASS: TestForceSystemdFlag (23.77s)
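The assertion here is that --force-systemd switched CRI-O to the systemd cgroup manager; the same file the test reads can be checked directly (the grep is an illustrative addition, key name per CRI-O's config format):

	out/minikube-linux-amd64 -p force-systemd-flag-536692 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"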

TestForceSystemdEnv (26.86s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-680592 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-680592 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.174807579s)
helpers_test.go:175: Cleaning up "force-systemd-env-680592" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-680592
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-680592: (4.680878164s)
--- PASS: TestForceSystemdEnv (26.86s)

TestKVMDriverInstallOrUpdate (1.3s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1018 15:01:22.036142   93187 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1018 15:01:22.036323   93187 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2970133016/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 15:01:22.070165   93187 install.go:163] /tmp/TestKVMDriverInstallOrUpdate2970133016/001/docker-machine-driver-kvm2 version is 1.1.1
W1018 15:01:22.070211   93187 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1018 15:01:22.070329   93187 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1018 15:01:22.070381   93187 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2970133016/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.30s)
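The flow logged above is: probe the driver found on PATH, see that 1.1.1 is older than the wanted 1.37.0, then fetch the release binary. The probe itself appears to be the driver's version subcommand, which can be run by hand (assuming the binary is on PATH):

	docker-machine-driver-kvm2 version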

TestErrorSpam/setup (24.12s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-542773 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-542773 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-542773 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-542773 --driver=docker  --container-runtime=crio: (24.123569686s)
--- PASS: TestErrorSpam/setup (24.12s)

TestErrorSpam/start (0.62s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

TestErrorSpam/status (0.91s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 status
--- PASS: TestErrorSpam/status (0.91s)

TestErrorSpam/pause (6.41s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 pause: exit status 80 (2.077294123s)

-- stdout --
	* Pausing node nospam-542773 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:25:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 pause: exit status 80 (1.984163061s)

-- stdout --
	* Pausing node nospam-542773 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:25:51Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 pause: exit status 80 (2.35271343s)

-- stdout --
	* Pausing node nospam-542773 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:25:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.41s)
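Note: all three pause attempts fail identically because runc cannot read its state directory inside the node. A minimal by-hand check against the same profile (the ls probe is an illustrative addition; the runc invocation is taken verbatim from the stderr above):

	out/minikube-linux-amd64 -p nospam-542773 ssh "sudo ls /run/runc"
	out/minikube-linux-amd64 -p nospam-542773 ssh "sudo runc list -f json"

If /run/runc is missing, runc list fails exactly as captured in the stderr blocks above.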

TestErrorSpam/unpause (6.42s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 unpause: exit status 80 (2.043749149s)

-- stdout --
	* Unpausing node nospam-542773 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:25:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 unpause: exit status 80 (2.087315927s)

-- stdout --
	* Unpausing node nospam-542773 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:25:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 unpause: exit status 80 (2.292781422s)

-- stdout --
	* Unpausing node nospam-542773 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T14:26:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.42s)

TestErrorSpam/stop (8.06s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 stop: (7.881858838s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542773 --log_dir /tmp/nospam-542773 stop
--- PASS: TestErrorSpam/stop (8.06s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21409-89690/.minikube/files/etc/test/nested/copy/93187/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.63s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-823635 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-823635 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (38.631089421s)
--- PASS: TestFunctional/serial/StartWithProxy (38.63s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.71s)
=== RUN   TestFunctional/serial/SoftStart
I1018 14:26:51.861116   93187 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-823635 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-823635 --alsologtostderr -v=8: (6.708306606s)
functional_test.go:678: soft start took 6.709153754s for "functional-823635" cluster.
I1018 14:26:58.569826   93187 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.71s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-823635 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.56s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.56s)

TestFunctional/serial/CacheCmd/cache/add_local (1.6s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-823635 /tmp/TestFunctionalserialCacheCmdcacheadd_local299358404/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 cache add minikube-local-cache-test:functional-823635
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-823635 cache add minikube-local-cache-test:functional-823635: (1.264293197s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 cache delete minikube-local-cache-test:functional-823635
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-823635
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.60s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-823635 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (273.071041ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)
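For reference, the reload flow exercised here is three CLI steps, all taken verbatim from the run above: evict the image from the node, repopulate it from minikube's local cache, then confirm it is back:

	out/minikube-linux-amd64 -p functional-823635 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-823635 cache reload
	out/minikube-linux-amd64 -p functional-823635 ssh sudo crictl inspecti registry.k8s.io/pause:latest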

TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 kubectl -- --context functional-823635 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-823635 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (41.91s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-823635 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-823635 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.905807106s)
functional_test.go:776: restart took 41.905953929s for "functional-823635" cluster.
I1018 14:27:46.942124   93187 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (41.91s)
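Whether the --extra-config flag actually reached the apiserver can be confirmed from the static pod spec; a sketch (the kube-apiserver-<node> pod name follows the usual naming convention and is an assumption here):

	kubectl --context functional-823635 -n kube-system get pod kube-apiserver-functional-823635 -o yaml | grep enable-admission-plugins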

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-823635 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.22s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-823635 logs: (1.21863553s)
--- PASS: TestFunctional/serial/LogsCmd (1.22s)

TestFunctional/serial/LogsFileCmd (1.24s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 logs --file /tmp/TestFunctionalserialLogsFileCmd32637203/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-823635 logs --file /tmp/TestFunctionalserialLogsFileCmd32637203/001/logs.txt: (1.243579398s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.24s)

TestFunctional/serial/InvalidService (3.94s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-823635 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-823635
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-823635: exit status 115 (336.821786ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31895 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-823635 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.94s)
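
The expected failure mode here is straightforward to reproduce by hand with the same files and profile; exit code 115 is the value the log shows for SVC_UNREACHABLE:

    kubectl --context functional-823635 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-823635
    echo $?   # 115: the service exists but has no running pod behind it
    kubectl --context functional-823635 delete -f testdata/invalidsvc.yaml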

TestFunctional/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-823635 config get cpus: exit status 14 (60.226014ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-823635 config get cpus: exit status 14 (52.848145ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
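
The exit codes exercised above follow a simple pattern: `config get` on an unset key fails with exit status 14, and succeeds after `config set`. A minimal sketch of the same sequence:

    out/minikube-linux-amd64 -p functional-823635 config unset cpus
    out/minikube-linux-amd64 -p functional-823635 config get cpus; echo $?   # 14: key not in config
    out/minikube-linux-amd64 -p functional-823635 config set cpus 2
    out/minikube-linux-amd64 -p functional-823635 config get cpus; echo $?   # 0: prints 2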

TestFunctional/parallel/DryRun (0.36s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-823635 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-823635 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (152.374871ms)
-- stdout --
	* [functional-823635] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1018 14:33:19.683820  137489 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:33:19.684109  137489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:33:19.684121  137489 out.go:374] Setting ErrFile to fd 2...
	I1018 14:33:19.684126  137489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:33:19.684365  137489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:33:19.684854  137489 out.go:368] Setting JSON to false
	I1018 14:33:19.685781  137489 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":8151,"bootTime":1760789849,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:33:19.685891  137489 start.go:141] virtualization: kvm guest
	I1018 14:33:19.687685  137489 out.go:179] * [functional-823635] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 14:33:19.689249  137489 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 14:33:19.689274  137489 notify.go:220] Checking for updates...
	I1018 14:33:19.691650  137489 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:33:19.693241  137489 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 14:33:19.694432  137489 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 14:33:19.695669  137489 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 14:33:19.696927  137489 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 14:33:19.698562  137489 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:33:19.699111  137489 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:33:19.723487  137489 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 14:33:19.723654  137489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:33:19.781110  137489 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-18 14:33:19.771610665 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:33:19.781224  137489 docker.go:318] overlay module found
	I1018 14:33:19.782839  137489 out.go:179] * Using the docker driver based on existing profile
	I1018 14:33:19.783890  137489 start.go:305] selected driver: docker
	I1018 14:33:19.783907  137489 start.go:925] validating driver "docker" against &{Name:functional-823635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-823635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:33:19.784045  137489 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 14:33:19.785838  137489 out.go:203] 
	W1018 14:33:19.786993  137489 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1018 14:33:19.788038  137489 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-823635 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.36s)
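
The first invocation fails validation before any resources are touched: `--dry-run` with `--memory 250MB` trips the RSRC_INSUFFICIENT_REQ_MEMORY check (usable minimum 1800MB) and exits 23, while the second run, without a memory override, passes. A minimal sketch of the failing case:

    out/minikube-linux-amd64 start -p functional-823635 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=crio
    echo $?   # 23: requested memory is below the usable minimum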

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-823635 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-823635 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (153.057553ms)
-- stdout --
	* [functional-823635] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1018 14:33:12.034701  135948 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:33:12.034841  135948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:33:12.034853  135948 out.go:374] Setting ErrFile to fd 2...
	I1018 14:33:12.034859  135948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:33:12.035201  135948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:33:12.035660  135948 out.go:368] Setting JSON to false
	I1018 14:33:12.036642  135948 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":8143,"bootTime":1760789849,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 14:33:12.036738  135948 start.go:141] virtualization: kvm guest
	I1018 14:33:12.038640  135948 out.go:179] * [functional-823635] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1018 14:33:12.039899  135948 notify.go:220] Checking for updates...
	I1018 14:33:12.039946  135948 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 14:33:12.041179  135948 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 14:33:12.042387  135948 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 14:33:12.043666  135948 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 14:33:12.044706  135948 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 14:33:12.045871  135948 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 14:33:12.047588  135948 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:33:12.048337  135948 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 14:33:12.072146  135948 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 14:33:12.072257  135948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:33:12.130074  135948 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-18 14:33:12.119744309 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:33:12.130226  135948 docker.go:318] overlay module found
	I1018 14:33:12.131936  135948 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1018 14:33:12.133139  135948 start.go:305] selected driver: docker
	I1018 14:33:12.133157  135948 start.go:925] validating driver "docker" against &{Name:functional-823635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-823635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 14:33:12.133232  135948 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 14:33:12.134810  135948 out.go:203] 
	W1018 14:33:12.136114  135948 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1018 14:33:12.137390  135948 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
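
The French output above is driven by the process locale; the exact environment variables the harness sets are not visible in this log, so the reproduction below is an assumption for illustration only:

    # hypothetical reproduction; LC_ALL=fr_FR.UTF-8 is assumed, not confirmed by this log
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-823635 --dry-run \
      --memory 250MB --driver=docker --container-runtime=crio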

TestFunctional/parallel/StatusCmd (0.91s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)
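
The second invocation shows that `status -f` accepts a Go template over the status struct. Note the label string in the logged command reads `kublet`; that is a typo in the free-form label text only, since the template field itself is the correctly spelled `{{.Kubelet}}`. A minimal sketch with clean labels:

    out/minikube-linux-amd64 -p functional-823635 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'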

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/SSHCmd (0.64s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

TestFunctional/parallel/CpCmd (1.69s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh -n functional-823635 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 cp functional-823635:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd616516983/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh -n functional-823635 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh -n functional-823635 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.69s)
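
The three cp invocations above cover host-to-node, node-to-host, and copying into a directory that does not yet exist on the node. A minimal sketch of the first two directions (./copy.txt is an arbitrary destination, not from this run):

    out/minikube-linux-amd64 -p functional-823635 cp testdata/cp-test.txt /home/docker/cp-test.txt           # host -> node
    out/minikube-linux-amd64 -p functional-823635 cp functional-823635:/home/docker/cp-test.txt ./copy.txt   # node -> host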

TestFunctional/parallel/MySQL (17.16s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-823635 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-8kx2d" [2ed8bf3d-0bee-4620-87f1-38ddd5c1b93a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-8kx2d" [2ed8bf3d-0bee-4620-87f1-38ddd5c1b93a] Running
E1018 14:28:03.486422   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 13.003524096s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-823635 exec mysql-5bb876957f-8kx2d -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-823635 exec mysql-5bb876957f-8kx2d -- mysql -ppassword -e "show databases;": exit status 1 (131.282382ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1018 14:28:07.104190   93187 retry.go:31] will retry after 883.996018ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-823635 exec mysql-5bb876957f-8kx2d -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-823635 exec mysql-5bb876957f-8kx2d -- mysql -ppassword -e "show databases;": exit status 1 (90.899791ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1018 14:28:08.080208   93187 retry.go:31] will retry after 1.298816812s: exit status 1
E1018 14:28:08.608249   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-823635 exec mysql-5bb876957f-8kx2d -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-823635 exec mysql-5bb876957f-8kx2d -- mysql -ppassword -e "show databases;": exit status 1 (90.332024ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1018 14:28:09.470127   93187 retry.go:31] will retry after 1.361656815s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-823635 exec mysql-5bb876957f-8kx2d -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (17.16s)
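
The ERROR 1045 and ERROR 2002 failures above are typical of the mysql image while it is still initializing (server restarting, then socket not yet up), and the harness retries with backoff until the query succeeds. A minimal sketch of the same wait loop, using the pod name from this run:

    until kubectl --context functional-823635 exec mysql-5bb876957f-8kx2d -- \
          mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
      sleep 2   # retry until mysqld is initialized and accepting connections
    done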

TestFunctional/parallel/FileSync (0.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/93187/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "sudo cat /etc/test/nested/copy/93187/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)
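
FileSync verifies that files staged on the host are mirrored into the node at the same path. A minimal sketch, assuming the conventional ~/.minikube/files staging directory (the staging path is not shown in this log):

    mkdir -p ~/.minikube/files/etc/test/nested/copy/93187
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/93187/hosts
    # after the next start, the file should appear at /etc/test/nested/copy/93187/hosts in the node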

TestFunctional/parallel/CertSync (1.73s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/93187.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "sudo cat /etc/ssl/certs/93187.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/93187.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "sudo cat /usr/share/ca-certificates/93187.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/931872.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "sudo cat /etc/ssl/certs/931872.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/931872.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "sudo cat /usr/share/ca-certificates/931872.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.73s)
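
Each certificate is checked in three places: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and a hash-named link such as 51391683.0. The hash-style basename appears to be an OpenSSL subject hash; a sketch of how such a name can be derived (an assumption, run inside the node):

    openssl x509 -in /etc/ssl/certs/93187.pem -noout -hash   # prints the 8-hex-digit basename, e.g. 51391683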

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-823635 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-823635 ssh "sudo systemctl is-active docker": exit status 1 (263.720785ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-823635 ssh "sudo systemctl is-active containerd": exit status 1 (265.352511ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
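
The non-zero exits above are the expected result on a crio cluster: `systemctl is-active` prints the unit state and exits 3 for an inactive unit (the "ssh: Process exited with status 3" on stderr), which the ssh wrapper surfaces as its own non-zero exit. A minimal sketch of the same probe:

    out/minikube-linux-amd64 -p functional-823635 ssh "sudo systemctl is-active docker"; echo $?
    # prints "inactive"; the remote command exits 3, so the wrapper exits non-zero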

TestFunctional/parallel/License (0.41s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.41s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.48s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-823635 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-823635 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-823635 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-823635 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 130240: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

TestFunctional/parallel/MountCmd/any-port (73.15s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-823635 /tmp/TestFunctionalparallelMountCmdany-port1639409505/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760797675783715635" to /tmp/TestFunctionalparallelMountCmdany-port1639409505/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760797675783715635" to /tmp/TestFunctionalparallelMountCmdany-port1639409505/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760797675783715635" to /tmp/TestFunctionalparallelMountCmdany-port1639409505/001/test-1760797675783715635
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-823635 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (413.785313ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1018 14:27:56.198391   93187 retry.go:31] will retry after 609.578329ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 18 14:27 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 18 14:27 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 18 14:27 test-1760797675783715635
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh cat /mount-9p/test-1760797675783715635
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-823635 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [2e4b863d-7736-4b58-ae41-a70c4da929c2] Pending
E1018 14:27:58.354789   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:27:58.361283   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:27:58.372751   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:27:58.394221   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:27:58.436152   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:27:58.517643   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:27:58.679128   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [2e4b863d-7736-4b58-ae41-a70c4da929c2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E1018 14:27:59.001364   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [2e4b863d-7736-4b58-ae41-a70c4da929c2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [2e4b863d-7736-4b58-ae41-a70c4da929c2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 1m10.003563096s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-823635 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-823635 /tmp/TestFunctionalparallelMountCmdany-port1639409505/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (73.15s)
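
The mount is served over 9p from the host, and the first findmnt probe races the mount daemon, failing once before the retry succeeds. A minimal sketch of the same verification, with /tmp/demo-mount as a hypothetical host directory:

    out/minikube-linux-amd64 mount -p functional-823635 /tmp/demo-mount:/mount-9p &   # run the mount in the background
    out/minikube-linux-amd64 -p functional-823635 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-823635 ssh "sudo umount -f /mount-9p"      # cleanup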

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-823635 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/MountCmd/specific-port (1.72s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-823635 /tmp/TestFunctionalparallelMountCmdspecific-port3358236204/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-823635 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (271.793945ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1018 14:29:09.203103   93187 retry.go:31] will retry after 441.750599ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-823635 /tmp/TestFunctionalparallelMountCmdspecific-port3358236204/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-823635 ssh "sudo umount -f /mount-9p": exit status 1 (259.778344ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-823635 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-823635 /tmp/TestFunctionalparallelMountCmdspecific-port3358236204/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.72s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-823635 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2868645048/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-823635 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2868645048/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-823635 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2868645048/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-823635 ssh "findmnt -T" /mount1: exit status 1 (332.959365ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1018 14:29:10.980485   93187 retry.go:31] will retry after 625.666099ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-823635 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-823635 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2868645048/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-823635 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2868645048/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-823635 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2868645048/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-823635 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "330.33446ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "52.02096ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "333.504083ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "49.495861ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
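Both timings above come from the same command run with and without --light. A minimal Go sketch of consuming "profile list -o json"; the output schema is not shown in this log, so it decodes into a generic map rather than a typed struct:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Capture the JSON profile listing from the tree-local binary.
	out, err := exec.Command("out/minikube-linux-amd64",
		"profile", "list", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Schema unknown here, so stay generic and just confirm it parses.
	var profiles map[string]any
	if err := json.Unmarshal(out, &profiles); err != nil {
		log.Fatal(err)
	}
	for key := range profiles {
		fmt.Println(key)
	}
}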

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-823635 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-823635 image ls --format short --alsologtostderr:
I1018 14:34:03.450957  139353 out.go:360] Setting OutFile to fd 1 ...
I1018 14:34:03.451228  139353 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:03.451239  139353 out.go:374] Setting ErrFile to fd 2...
I1018 14:34:03.451243  139353 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:03.451499  139353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
I1018 14:34:03.452147  139353 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:03.452253  139353 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:03.452652  139353 cli_runner.go:164] Run: docker container inspect functional-823635 --format={{.State.Status}}
I1018 14:34:03.471603  139353 ssh_runner.go:195] Run: systemctl --version
I1018 14:34:03.471661  139353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-823635
I1018 14:34:03.489063  139353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/functional-823635/id_rsa Username:docker}
I1018 14:34:03.584742  139353 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-823635 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-823635 image ls --format table --alsologtostderr:
I1018 14:34:03.871869  139459 out.go:360] Setting OutFile to fd 1 ...
I1018 14:34:03.872154  139459 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:03.872165  139459 out.go:374] Setting ErrFile to fd 2...
I1018 14:34:03.872171  139459 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:03.872402  139459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
I1018 14:34:03.873013  139459 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:03.873136  139459 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:03.873525  139459 cli_runner.go:164] Run: docker container inspect functional-823635 --format={{.State.Status}}
I1018 14:34:03.892081  139459 ssh_runner.go:195] Run: systemctl --version
I1018 14:34:03.892147  139459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-823635
I1018 14:34:03.909835  139459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/functional-823635/id_rsa Username:docker}
I1018 14:34:04.007817  139459 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-823635 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"i
d":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e0
6","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03
e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba6833
0079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-823635 image ls --format json --alsologtostderr:
I1018 14:34:03.662083  139404 out.go:360] Setting OutFile to fd 1 ...
I1018 14:34:03.662316  139404 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:03.662324  139404 out.go:374] Setting ErrFile to fd 2...
I1018 14:34:03.662328  139404 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:03.662497  139404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
I1018 14:34:03.663067  139404 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:03.663150  139404 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:03.663501  139404 cli_runner.go:164] Run: docker container inspect functional-823635 --format={{.State.Status}}
I1018 14:34:03.681543  139404 ssh_runner.go:195] Run: systemctl --version
I1018 14:34:03.681591  139404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-823635
I1018 14:34:03.698339  139404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/functional-823635/id_rsa Username:docker}
I1018 14:34:03.793769  139404 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
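The JSON dump above is an array of objects with id, repoDigests, repoTags, and size fields (size is a decimal string, not a number). A minimal Go sketch that decodes it, assuming the same tree-local binary and profile as the rest of this run:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the fields visible in the JSON dump above; note that
// size is carried as a decimal string.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-823635",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		// Print a truncated ID, the tags, and the raw size string.
		fmt.Printf("%.13s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}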

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-823635 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-823635 image ls --format yaml --alsologtostderr:
I1018 14:34:04.087634  139509 out.go:360] Setting OutFile to fd 1 ...
I1018 14:34:04.087946  139509 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:04.087957  139509 out.go:374] Setting ErrFile to fd 2...
I1018 14:34:04.087961  139509 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:04.088146  139509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
I1018 14:34:04.088709  139509 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:04.088797  139509 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:04.089208  139509 cli_runner.go:164] Run: docker container inspect functional-823635 --format={{.State.Status}}
I1018 14:34:04.107546  139509 ssh_runner.go:195] Run: systemctl --version
I1018 14:34:04.107595  139509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-823635
I1018 14:34:04.124394  139509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/functional-823635/id_rsa Username:docker}
I1018 14:34:04.220018  139509 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-823635 ssh pgrep buildkitd: exit status 1 (257.856657ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 image build -t localhost/my-image:functional-823635 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-823635 image build -t localhost/my-image:functional-823635 testdata/build --alsologtostderr: (3.150517702s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-823635 image build -t localhost/my-image:functional-823635 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 964ff077031
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-823635
--> bc273194e2d
Successfully tagged localhost/my-image:functional-823635
bc273194e2d68ca134e6582aa7e092ecb4ca08b27a104f23bf6047c06c2df411
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-823635 image build -t localhost/my-image:functional-823635 testdata/build --alsologtostderr:
I1018 14:34:04.556227  139686 out.go:360] Setting OutFile to fd 1 ...
I1018 14:34:04.556483  139686 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:04.556494  139686 out.go:374] Setting ErrFile to fd 2...
I1018 14:34:04.556500  139686 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 14:34:04.556759  139686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
I1018 14:34:04.557400  139686 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:04.558141  139686 config.go:182] Loaded profile config "functional-823635": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 14:34:04.558546  139686 cli_runner.go:164] Run: docker container inspect functional-823635 --format={{.State.Status}}
I1018 14:34:04.576207  139686 ssh_runner.go:195] Run: systemctl --version
I1018 14:34:04.576265  139686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-823635
I1018 14:34:04.594135  139686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/functional-823635/id_rsa Username:docker}
I1018 14:34:04.689847  139686 build_images.go:161] Building image from path: /tmp/build.1490388800.tar
I1018 14:34:04.689909  139686 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1018 14:34:04.698356  139686 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1490388800.tar
I1018 14:34:04.702120  139686 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1490388800.tar: stat -c "%s %y" /var/lib/minikube/build/build.1490388800.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1490388800.tar': No such file or directory
I1018 14:34:04.702152  139686 ssh_runner.go:362] scp /tmp/build.1490388800.tar --> /var/lib/minikube/build/build.1490388800.tar (3072 bytes)
I1018 14:34:04.720657  139686 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1490388800
I1018 14:34:04.728998  139686 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1490388800 -xf /var/lib/minikube/build/build.1490388800.tar
I1018 14:34:04.737489  139686 crio.go:315] Building image: /var/lib/minikube/build/build.1490388800
I1018 14:34:04.737573  139686 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-823635 /var/lib/minikube/build/build.1490388800 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1018 14:34:07.639177  139686 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-823635 /var/lib/minikube/build/build.1490388800 --cgroup-manager=cgroupfs: (2.901573452s)
I1018 14:34:07.639286  139686 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1490388800
I1018 14:34:07.647858  139686 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1490388800.tar
I1018 14:34:07.656078  139686 build_images.go:217] Built localhost/my-image:functional-823635 from /tmp/build.1490388800.tar
I1018 14:34:07.656120  139686 build_images.go:133] succeeded building to: functional-823635
I1018 14:34:07.656127  139686 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 image ls
E1018 14:37:58.354425   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.62s)
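The flow above: pgrep confirms buildkitd is absent (exit status 1 is the expected result on crio), then "image build" tars testdata/build, copies the archive to /var/lib/minikube/build on the node, and runs podman build there. A minimal Go sketch of the same invocation (illustrative, not the test code itself):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-amd64"
	// On the crio runtime a non-zero pgrep exit, as in the log above,
	// is the expected outcome: there is no buildkitd to talk to.
	if exec.Command(mk, "-p", "functional-823635", "ssh", "pgrep buildkitd").Run() == nil {
		log.Println("buildkitd is running; a different build path would be used")
	}
	// minikube handles the tar/copy/podman-build steps internally.
	cmd := exec.Command(mk, "-p", "functional-823635", "image", "build",
		"-t", "localhost/my-image:functional-823635", "testdata/build")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}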

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.53s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.506075067s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-823635
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.53s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 image rm kicbase/echo-server:functional-823635 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ServiceCmd/List (1.69s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-823635 service list: (1.688734528s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-823635 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-823635 service list -o json: (1.687930082s)
functional_test.go:1504: Took "1.688056534s" to run "out/minikube-linux-amd64 -p functional-823635 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-823635
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-823635
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-823635
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (150.55s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-899706 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m29.817734395s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (150.55s)
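A minimal Go sketch of the start-then-status sequence above, using the flags from this run; --wait true makes start block until the cluster components report healthy, so the follow-up status call should show every node Running (assumes the tree-local binary):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-amd64"
	// --ha provisions multiple control-plane nodes behind one endpoint.
	start := exec.Command(mk, "-p", "ha-899706", "start", "--ha",
		"--memory", "3072", "--wait", "true",
		"--driver=docker", "--container-runtime=crio")
	start.Stdout, start.Stderr = os.Stdout, os.Stderr
	if err := start.Run(); err != nil {
		log.Fatal(err)
	}
	// status exits non-zero if any node is down (see the exit status 7
	// after the node stop later in this log).
	status := exec.Command(mk, "-p", "ha-899706", "status")
	status.Stdout, status.Stderr = os.Stdout, os.Stderr
	if err := status.Run(); err != nil {
		log.Fatal(err)
	}
}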

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.87s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-899706 kubectl -- rollout status deployment/busybox: (3.931864046s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- exec busybox-7b57f96db7-9tgff -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- exec busybox-7b57f96db7-pmq95 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- exec busybox-7b57f96db7-qwrpl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- exec busybox-7b57f96db7-9tgff -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- exec busybox-7b57f96db7-pmq95 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- exec busybox-7b57f96db7-qwrpl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- exec busybox-7b57f96db7-9tgff -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- exec busybox-7b57f96db7-pmq95 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- exec busybox-7b57f96db7-qwrpl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.87s)
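The deploy check above fans out nslookup over every busybox pod for progressively more qualified service names. A minimal Go sketch of that fan-out; the pod names are the ones from this run and would differ elsewhere, and it assumes kubectl is on PATH with the ha-899706 context configured:

package main

import (
	"log"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7b57f96db7-9tgff", "busybox-7b57f96db7-pmq95", "busybox-7b57f96db7-qwrpl"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, name := range names {
		for _, pod := range pods {
			// Each pod must resolve each name for cluster DNS to count as healthy.
			cmd := exec.Command("kubectl", "--context", "ha-899706",
				"exec", pod, "--", "nslookup", name)
			if out, err := cmd.CombinedOutput(); err != nil {
				log.Fatalf("%s -> %s failed: %v\n%s", pod, name, err, out)
			}
		}
	}
}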

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.97s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- exec busybox-7b57f96db7-9tgff -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- exec busybox-7b57f96db7-9tgff -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- exec busybox-7b57f96db7-pmq95 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- exec busybox-7b57f96db7-pmq95 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- exec busybox-7b57f96db7-qwrpl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 kubectl -- exec busybox-7b57f96db7-qwrpl -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.97s)
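The shell pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) picks the address field out of busybox nslookup's fixed output layout, and the single ping confirms the host gateway is reachable from inside the pod. A minimal Go sketch of the same resolve-and-ping check, under the same pod-name and kubectl-context assumptions as the previous sketch:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7b57f96db7-9tgff"
	// Extract the resolved address using the same pipeline as the test.
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "--context", "ha-899706",
		"exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		log.Fatal(err)
	}
	ip := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal =", ip)
	// One ICMP echo is enough to prove pod-to-host reachability.
	ping := fmt.Sprintf("ping -c 1 %s", ip)
	if err := exec.Command("kubectl", "--context", "ha-899706",
		"exec", pod, "--", "sh", "-c", ping).Run(); err != nil {
		log.Fatal(err)
	}
}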

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.61s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-899706 node add --alsologtostderr -v 5: (23.738433971s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.61s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-899706 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

TestMultiControlPlane/serial/CopyFile (16.42s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp testdata/cp-test.txt ha-899706:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp ha-899706:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3223588954/001/cp-test_ha-899706.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp ha-899706:/home/docker/cp-test.txt ha-899706-m02:/home/docker/cp-test_ha-899706_ha-899706-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m02 "sudo cat /home/docker/cp-test_ha-899706_ha-899706-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp ha-899706:/home/docker/cp-test.txt ha-899706-m03:/home/docker/cp-test_ha-899706_ha-899706-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m03 "sudo cat /home/docker/cp-test_ha-899706_ha-899706-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp ha-899706:/home/docker/cp-test.txt ha-899706-m04:/home/docker/cp-test_ha-899706_ha-899706-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m04 "sudo cat /home/docker/cp-test_ha-899706_ha-899706-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp testdata/cp-test.txt ha-899706-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp ha-899706-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3223588954/001/cp-test_ha-899706-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp ha-899706-m02:/home/docker/cp-test.txt ha-899706:/home/docker/cp-test_ha-899706-m02_ha-899706.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706 "sudo cat /home/docker/cp-test_ha-899706-m02_ha-899706.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp ha-899706-m02:/home/docker/cp-test.txt ha-899706-m03:/home/docker/cp-test_ha-899706-m02_ha-899706-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m03 "sudo cat /home/docker/cp-test_ha-899706-m02_ha-899706-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp ha-899706-m02:/home/docker/cp-test.txt ha-899706-m04:/home/docker/cp-test_ha-899706-m02_ha-899706-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m04 "sudo cat /home/docker/cp-test_ha-899706-m02_ha-899706-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp testdata/cp-test.txt ha-899706-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp ha-899706-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3223588954/001/cp-test_ha-899706-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp ha-899706-m03:/home/docker/cp-test.txt ha-899706:/home/docker/cp-test_ha-899706-m03_ha-899706.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706 "sudo cat /home/docker/cp-test_ha-899706-m03_ha-899706.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp ha-899706-m03:/home/docker/cp-test.txt ha-899706-m02:/home/docker/cp-test_ha-899706-m03_ha-899706-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m02 "sudo cat /home/docker/cp-test_ha-899706-m03_ha-899706-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp ha-899706-m03:/home/docker/cp-test.txt ha-899706-m04:/home/docker/cp-test_ha-899706-m03_ha-899706-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m04 "sudo cat /home/docker/cp-test_ha-899706-m03_ha-899706-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp testdata/cp-test.txt ha-899706-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp ha-899706-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3223588954/001/cp-test_ha-899706-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp ha-899706-m04:/home/docker/cp-test.txt ha-899706:/home/docker/cp-test_ha-899706-m04_ha-899706.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706 "sudo cat /home/docker/cp-test_ha-899706-m04_ha-899706.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp ha-899706-m04:/home/docker/cp-test.txt ha-899706-m02:/home/docker/cp-test_ha-899706-m04_ha-899706-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m02 "sudo cat /home/docker/cp-test_ha-899706-m04_ha-899706-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 cp ha-899706-m04:/home/docker/cp-test.txt ha-899706-m03:/home/docker/cp-test_ha-899706-m04_ha-899706-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 ssh -n ha-899706-m03 "sudo cat /home/docker/cp-test_ha-899706-m04_ha-899706-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.42s)
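The matrix above is the same two-step pattern repeated across every node pair: "cp" pushes a file into a node and "ssh -n <node> sudo cat" reads it back. A minimal Go sketch of the basic push-and-verify loop over this run's four nodes (assumes the tree-local binary and the testdata file):

package main

import (
	"log"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-amd64"
	nodes := []string{"ha-899706", "ha-899706-m02", "ha-899706-m03", "ha-899706-m04"}
	for _, node := range nodes {
		// Push the test file to the node...
		cp := exec.Command(mk, "-p", "ha-899706", "cp",
			"testdata/cp-test.txt", node+":/home/docker/cp-test.txt")
		if out, err := cp.CombinedOutput(); err != nil {
			log.Fatalf("cp to %s: %v\n%s", node, err, out)
		}
		// ...then read it back over ssh to confirm it arrived intact.
		cat := exec.Command(mk, "-p", "ha-899706", "ssh", "-n", node,
			"sudo cat /home/docker/cp-test.txt")
		if out, err := cat.CombinedOutput(); err != nil {
			log.Fatalf("verify on %s: %v\n%s", node, err, out)
		}
	}
}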

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (14.25s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 node stop m02 --alsologtostderr -v 5
E1018 14:42:53.422170   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:42:53.428581   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:42:53.439979   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:42:53.461415   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:42:53.502893   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:42:53.584338   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:42:53.745893   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:42:54.067576   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:42:54.709139   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:42:55.991133   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-899706 node stop m02 --alsologtostderr -v 5: (13.565807908s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-899706 status --alsologtostderr -v 5: exit status 7 (683.498871ms)

-- stdout --
	ha-899706
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-899706-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-899706-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-899706-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I1018 14:42:56.086183  163688 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:42:56.086606  163688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:42:56.086615  163688 out.go:374] Setting ErrFile to fd 2...
	I1018 14:42:56.086620  163688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:42:56.086830  163688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:42:56.087022  163688 out.go:368] Setting JSON to false
	I1018 14:42:56.087049  163688 mustload.go:65] Loading cluster: ha-899706
	I1018 14:42:56.087096  163688 notify.go:220] Checking for updates...
	I1018 14:42:56.087435  163688 config.go:182] Loaded profile config "ha-899706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:42:56.087450  163688 status.go:174] checking status of ha-899706 ...
	I1018 14:42:56.087832  163688 cli_runner.go:164] Run: docker container inspect ha-899706 --format={{.State.Status}}
	I1018 14:42:56.106040  163688 status.go:371] ha-899706 host status = "Running" (err=<nil>)
	I1018 14:42:56.106076  163688 host.go:66] Checking if "ha-899706" exists ...
	I1018 14:42:56.106333  163688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-899706
	I1018 14:42:56.124575  163688 host.go:66] Checking if "ha-899706" exists ...
	I1018 14:42:56.124816  163688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 14:42:56.124850  163688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-899706
	I1018 14:42:56.142649  163688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/ha-899706/id_rsa Username:docker}
	I1018 14:42:56.238575  163688 ssh_runner.go:195] Run: systemctl --version
	I1018 14:42:56.244946  163688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 14:42:56.257250  163688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:42:56.315375  163688 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-18 14:42:56.304602902 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:42:56.315881  163688 kubeconfig.go:125] found "ha-899706" server: "https://192.168.49.254:8443"
	I1018 14:42:56.315936  163688 api_server.go:166] Checking apiserver status ...
	I1018 14:42:56.315980  163688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 14:42:56.328215  163688 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	W1018 14:42:56.336867  163688 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 14:42:56.336951  163688 ssh_runner.go:195] Run: ls
	I1018 14:42:56.340878  163688 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 14:42:56.345167  163688 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 14:42:56.345194  163688 status.go:463] ha-899706 apiserver status = Running (err=<nil>)
	I1018 14:42:56.345207  163688 status.go:176] ha-899706 status: &{Name:ha-899706 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 14:42:56.345227  163688 status.go:174] checking status of ha-899706-m02 ...
	I1018 14:42:56.345502  163688 cli_runner.go:164] Run: docker container inspect ha-899706-m02 --format={{.State.Status}}
	I1018 14:42:56.362590  163688 status.go:371] ha-899706-m02 host status = "Stopped" (err=<nil>)
	I1018 14:42:56.362611  163688 status.go:384] host is not running, skipping remaining checks
	I1018 14:42:56.362619  163688 status.go:176] ha-899706-m02 status: &{Name:ha-899706-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 14:42:56.362642  163688 status.go:174] checking status of ha-899706-m03 ...
	I1018 14:42:56.362903  163688 cli_runner.go:164] Run: docker container inspect ha-899706-m03 --format={{.State.Status}}
	I1018 14:42:56.380407  163688 status.go:371] ha-899706-m03 host status = "Running" (err=<nil>)
	I1018 14:42:56.380450  163688 host.go:66] Checking if "ha-899706-m03" exists ...
	I1018 14:42:56.380753  163688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-899706-m03
	I1018 14:42:56.398408  163688 host.go:66] Checking if "ha-899706-m03" exists ...
	I1018 14:42:56.398672  163688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 14:42:56.398731  163688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-899706-m03
	I1018 14:42:56.417228  163688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/ha-899706-m03/id_rsa Username:docker}
	I1018 14:42:56.512791  163688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 14:42:56.525848  163688 kubeconfig.go:125] found "ha-899706" server: "https://192.168.49.254:8443"
	I1018 14:42:56.525878  163688 api_server.go:166] Checking apiserver status ...
	I1018 14:42:56.525926  163688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 14:42:56.537252  163688 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1185/cgroup
	W1018 14:42:56.546268  163688 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1185/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 14:42:56.546332  163688 ssh_runner.go:195] Run: ls
	I1018 14:42:56.550155  163688 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 14:42:56.554354  163688 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 14:42:56.554376  163688 status.go:463] ha-899706-m03 apiserver status = Running (err=<nil>)
	I1018 14:42:56.554384  163688 status.go:176] ha-899706-m03 status: &{Name:ha-899706-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 14:42:56.554399  163688 status.go:174] checking status of ha-899706-m04 ...
	I1018 14:42:56.554626  163688 cli_runner.go:164] Run: docker container inspect ha-899706-m04 --format={{.State.Status}}
	I1018 14:42:56.572710  163688 status.go:371] ha-899706-m04 host status = "Running" (err=<nil>)
	I1018 14:42:56.572735  163688 host.go:66] Checking if "ha-899706-m04" exists ...
	I1018 14:42:56.572997  163688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-899706-m04
	I1018 14:42:56.590469  163688 host.go:66] Checking if "ha-899706-m04" exists ...
	I1018 14:42:56.590824  163688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 14:42:56.590876  163688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-899706-m04
	I1018 14:42:56.609084  163688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/ha-899706-m04/id_rsa Username:docker}
	I1018 14:42:56.704637  163688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 14:42:56.718817  163688 status.go:176] ha-899706-m04 status: &{Name:ha-899706-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.25s)
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)
TestMultiControlPlane/serial/RestartSecondaryNode (9.03s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 node start m02 --alsologtostderr -v 5
E1018 14:42:58.354278   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:42:58.552848   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:43:03.675015   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-899706 node start m02 --alsologtostderr -v 5: (8.098644352s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.03s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (100.72s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 stop --alsologtostderr -v 5
E1018 14:43:13.917147   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:43:34.399113   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-899706 stop --alsologtostderr -v 5: (48.713692099s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 start --wait true --alsologtostderr -v 5
E1018 14:44:15.360460   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:44:21.419167   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-899706 start --wait true --alsologtostderr -v 5: (51.900325453s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (100.72s)
TestMultiControlPlane/serial/DeleteSecondaryNode (10.5s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-899706 node delete m03 --alsologtostderr -v 5: (9.70369687s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.50s)
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)
TestMultiControlPlane/serial/StopCluster (41.39s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 stop --alsologtostderr -v 5
E1018 14:45:37.284748   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-899706 stop --alsologtostderr -v 5: (41.280603324s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-899706 status --alsologtostderr -v 5: exit status 7 (104.086316ms)

-- stdout --
	ha-899706
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-899706-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-899706-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1018 14:45:40.532234  177648 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:45:40.532482  177648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:45:40.532490  177648 out.go:374] Setting ErrFile to fd 2...
	I1018 14:45:40.532494  177648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:45:40.532683  177648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:45:40.532866  177648 out.go:368] Setting JSON to false
	I1018 14:45:40.532892  177648 mustload.go:65] Loading cluster: ha-899706
	I1018 14:45:40.532954  177648 notify.go:220] Checking for updates...
	I1018 14:45:40.533305  177648 config.go:182] Loaded profile config "ha-899706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:45:40.533326  177648 status.go:174] checking status of ha-899706 ...
	I1018 14:45:40.533724  177648 cli_runner.go:164] Run: docker container inspect ha-899706 --format={{.State.Status}}
	I1018 14:45:40.552120  177648 status.go:371] ha-899706 host status = "Stopped" (err=<nil>)
	I1018 14:45:40.552197  177648 status.go:384] host is not running, skipping remaining checks
	I1018 14:45:40.552208  177648 status.go:176] ha-899706 status: &{Name:ha-899706 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 14:45:40.552249  177648 status.go:174] checking status of ha-899706-m02 ...
	I1018 14:45:40.552504  177648 cli_runner.go:164] Run: docker container inspect ha-899706-m02 --format={{.State.Status}}
	I1018 14:45:40.570530  177648 status.go:371] ha-899706-m02 host status = "Stopped" (err=<nil>)
	I1018 14:45:40.570549  177648 status.go:384] host is not running, skipping remaining checks
	I1018 14:45:40.570555  177648 status.go:176] ha-899706-m02 status: &{Name:ha-899706-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 14:45:40.570572  177648 status.go:174] checking status of ha-899706-m04 ...
	I1018 14:45:40.570803  177648 cli_runner.go:164] Run: docker container inspect ha-899706-m04 --format={{.State.Status}}
	I1018 14:45:40.587407  177648 status.go:371] ha-899706-m04 host status = "Stopped" (err=<nil>)
	I1018 14:45:40.587435  177648 status.go:384] host is not running, skipping remaining checks
	I1018 14:45:40.587444  177648 status.go:176] ha-899706-m04 status: &{Name:ha-899706-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.39s)
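Note: as the transcript shows, "minikube status" exits non-zero once any host is stopped (exit status 7 here), so the command itself can gate automation. A minimal sketch, reusing the profile and binary path from this run:

    out/minikube-linux-amd64 -p ha-899706 status
    rc=$?
    # rc is 0 only when host/kubelet/apiserver all report Running;
    # the all-stopped state captured above returned 7 instead.
    [ "$rc" -eq 0 ] || echo "cluster not fully running (exit $rc)"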
TestMultiControlPlane/serial/RestartCluster (51.49s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-899706 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (50.656045476s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (51.49s)
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)
TestMultiControlPlane/serial/AddSecondaryNode (35.65s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-899706 node add --control-plane --alsologtostderr -v 5: (34.814798859s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-899706 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.65s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)
TestJSONOutput/start/Command (39.48s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-822582 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1018 14:47:53.421867   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-822582 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (39.480860617s)
--- PASS: TestJSONOutput/start/Command (39.48s)
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (7.95s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-822582 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-822582 --output=json --user=testUser: (7.95346411s)
--- PASS: TestJSONOutput/stop/Command (7.95s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-803467 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-803467 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (67.158031ms)

-- stdout --
	{"specversion":"1.0","id":"212ee13b-09cf-4b68-b403-fa9368093aa4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-803467] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"90ec4125-a296-4518-9352-88259be35433","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21409"}}
	{"specversion":"1.0","id":"1dae7c95-0d12-48fd-ae4f-eda29eaa54fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4d00323c-6959-4536-a0f1-f67dfc09f07c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig"}}
	{"specversion":"1.0","id":"10ba1c8c-e32e-4106-945e-c71b69c5c82e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube"}}
	{"specversion":"1.0","id":"448b8b45-b98b-49c6-89e3-58fadb0b4850","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"17585c96-6073-4583-9808-452dff183ce9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2c755a19-0154-4ea0-b8b3-264b8888191d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-803467" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-803467
--- PASS: TestErrorJSONOutput (0.21s)
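Note: each stdout line above is a self-contained CloudEvents-style JSON object, so the stream can be filtered line by line. A minimal sketch, assuming jq is available (jq is not part of this test run):

    out/minikube-linux-amd64 start -p json-output-error-803467 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # prints: The driver 'fail' is not supported on linux/amd64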
TestKicCustomNetwork/create_custom_network (27.78s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-681672 --network=
E1018 14:48:21.128092   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-681672 --network=: (25.589010723s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-681672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-681672
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-681672: (2.175585203s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.78s)
TestKicCustomNetwork/use_default_bridge_network (24.54s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-519265 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-519265 --network=bridge: (22.52708729s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-519265" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-519265
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-519265: (1.989788844s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.54s)
TestKicExistingNetwork (25.06s)
=== RUN   TestKicExistingNetwork
I1018 14:49:07.116709   93187 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1018 14:49:07.131858   93187 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1018 14:49:07.131961   93187 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1018 14:49:07.131995   93187 cli_runner.go:164] Run: docker network inspect existing-network
W1018 14:49:07.147946   93187 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1018 14:49:07.147975   93187 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1018 14:49:07.147987   93187 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1018 14:49:07.148142   93187 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1018 14:49:07.164960   93187 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-67ded9675d49 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:eb:89:76:0f:a6} reservation:<nil>}
I1018 14:49:07.165391   93187 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00034a470}
I1018 14:49:07.165422   93187 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1018 14:49:07.165465   93187 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1018 14:49:07.219376   93187 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-402702 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-402702 --network=existing-network: (22.897145959s)
helpers_test.go:175: Cleaning up "existing-network-402702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-402702
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-402702: (2.024452554s)
I1018 14:49:32.157783   93187 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.06s)
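Note: the log above walks the whole flow this test exercises: inspect the network, find it missing, pick the free 192.168.58.0/24 subnet, create the network, then start minikube against it. Reduced to the two user-facing steps (a sketch; minikube's own create call additionally sets MTU, masquerading, and minikube labels, as logged above):

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    out/minikube-linux-amd64 start -p existing-network-402702 --network=existing-network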
TestKicCustomSubnet (27.78s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-664408 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-664408 --subnet=192.168.60.0/24: (25.650183013s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-664408 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-664408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-664408
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-664408: (2.109025523s)
--- PASS: TestKicCustomSubnet (27.78s)
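Note: the two commands above double as a recipe for pinning and verifying a KIC subnet; the inspect format string prints the subnet Docker actually allocated:

    out/minikube-linux-amd64 start -p custom-subnet-664408 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-664408 --format "{{(index .IPAM.Config 0).Subnet}}"
    # expected to print 192.168.60.0/24 when the requested subnet was honored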
TestKicStaticIP (28.06s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-206329 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-206329 --static-ip=192.168.200.200: (25.767738565s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-206329 ip
helpers_test.go:175: Cleaning up "static-ip-206329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-206329
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-206329: (2.156508196s)
--- PASS: TestKicStaticIP (28.06s)
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)
TestMinikubeProfile (50.46s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-039946 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-039946 --driver=docker  --container-runtime=crio: (23.313865894s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-042961 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-042961 --driver=docker  --container-runtime=crio: (21.142712926s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-039946
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-042961
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-042961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-042961
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-042961: (2.3866229s)
helpers_test.go:175: Cleaning up "first-039946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-039946
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-039946: (2.402240247s)
--- PASS: TestMinikubeProfile (50.46s)
TestMountStart/serial/StartWithMountFirst (5.66s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-530088 --memory=3072 --mount-string /tmp/TestMountStartserial26980820/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-530088 --memory=3072 --mount-string /tmp/TestMountStartserial26980820/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.658007637s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.66s)
TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-530088 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
TestMountStart/serial/StartWithMountSecond (5.56s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-544027 --memory=3072 --mount-string /tmp/TestMountStartserial26980820/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-544027 --memory=3072 --mount-string /tmp/TestMountStartserial26980820/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.563089283s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.56s)
TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-544027 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)
TestMountStart/serial/DeleteFirst (1.71s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-530088 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-530088 --alsologtostderr -v=5: (1.708655895s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)
TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-544027 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)
TestMountStart/serial/Stop (1.24s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-544027
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-544027: (1.241276961s)
--- PASS: TestMountStart/serial/Stop (1.24s)
TestMountStart/serial/RestartStopped (7.19s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-544027
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-544027: (6.190814802s)
--- PASS: TestMountStart/serial/RestartStopped (7.19s)
TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-544027 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)
TestMultiNode/serial/FreshStart2Nodes (90.13s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-008767 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1018 14:52:53.418313   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 14:52:58.354142   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-008767 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m29.652816692s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (90.13s)
TestMultiNode/serial/DeployApp2Nodes (4.8s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008767 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008767 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-008767 -- rollout status deployment/busybox: (3.470378033s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008767 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008767 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008767 -- exec busybox-7b57f96db7-d49gx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008767 -- exec busybox-7b57f96db7-gxf26 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008767 -- exec busybox-7b57f96db7-d49gx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008767 -- exec busybox-7b57f96db7-gxf26 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008767 -- exec busybox-7b57f96db7-d49gx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008767 -- exec busybox-7b57f96db7-gxf26 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.80s)
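Note: the assertions above follow one pattern: list the busybox pod names via jsonpath, then exec nslookup in each pod for progressively more qualified names. A condensed sketch of that loop (assumes kubectl already points at this cluster and, as in this run, only the busybox pods exist in the default namespace):

    for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}'); do
      kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done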
TestMultiNode/serial/PingHostFrom2Pods (0.67s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008767 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008767 -- exec busybox-7b57f96db7-d49gx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008767 -- exec busybox-7b57f96db7-d49gx -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008767 -- exec busybox-7b57f96db7-gxf26 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008767 -- exec busybox-7b57f96db7-gxf26 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.67s)
TestMultiNode/serial/AddNode (23.92s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-008767 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-008767 -v=5 --alsologtostderr: (23.292533835s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.92s)
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-008767 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)
TestMultiNode/serial/ProfileList (0.64s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)
TestMultiNode/serial/CopyFile (9.39s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 cp testdata/cp-test.txt multinode-008767:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 ssh -n multinode-008767 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 cp multinode-008767:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile269655399/001/cp-test_multinode-008767.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 ssh -n multinode-008767 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 cp multinode-008767:/home/docker/cp-test.txt multinode-008767-m02:/home/docker/cp-test_multinode-008767_multinode-008767-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 ssh -n multinode-008767 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 ssh -n multinode-008767-m02 "sudo cat /home/docker/cp-test_multinode-008767_multinode-008767-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 cp multinode-008767:/home/docker/cp-test.txt multinode-008767-m03:/home/docker/cp-test_multinode-008767_multinode-008767-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 ssh -n multinode-008767 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 ssh -n multinode-008767-m03 "sudo cat /home/docker/cp-test_multinode-008767_multinode-008767-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 cp testdata/cp-test.txt multinode-008767-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 ssh -n multinode-008767-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 cp multinode-008767-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile269655399/001/cp-test_multinode-008767-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 ssh -n multinode-008767-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 cp multinode-008767-m02:/home/docker/cp-test.txt multinode-008767:/home/docker/cp-test_multinode-008767-m02_multinode-008767.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 ssh -n multinode-008767-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 ssh -n multinode-008767 "sudo cat /home/docker/cp-test_multinode-008767-m02_multinode-008767.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 cp multinode-008767-m02:/home/docker/cp-test.txt multinode-008767-m03:/home/docker/cp-test_multinode-008767-m02_multinode-008767-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 ssh -n multinode-008767-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 ssh -n multinode-008767-m03 "sudo cat /home/docker/cp-test_multinode-008767-m02_multinode-008767-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 cp testdata/cp-test.txt multinode-008767-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 ssh -n multinode-008767-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 cp multinode-008767-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile269655399/001/cp-test_multinode-008767-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 ssh -n multinode-008767-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 cp multinode-008767-m03:/home/docker/cp-test.txt multinode-008767:/home/docker/cp-test_multinode-008767-m03_multinode-008767.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 ssh -n multinode-008767-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 ssh -n multinode-008767 "sudo cat /home/docker/cp-test_multinode-008767-m03_multinode-008767.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 cp multinode-008767-m03:/home/docker/cp-test.txt multinode-008767-m02:/home/docker/cp-test_multinode-008767-m03_multinode-008767-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 ssh -n multinode-008767-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 ssh -n multinode-008767-m02 "sudo cat /home/docker/cp-test_multinode-008767-m03_multinode-008767-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.39s)

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-008767 node stop m03: (1.264334188s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-008767 status: exit status 7 (498.872239ms)

                                                
                                                
-- stdout --
	multinode-008767
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-008767-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-008767-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-008767 status --alsologtostderr: exit status 7 (479.208454ms)

                                                
                                                
-- stdout --
	multinode-008767
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-008767-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-008767-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 14:53:54.203206  237384 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:53:54.203468  237384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:53:54.203478  237384 out.go:374] Setting ErrFile to fd 2...
	I1018 14:53:54.203483  237384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:53:54.203690  237384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:53:54.203863  237384 out.go:368] Setting JSON to false
	I1018 14:53:54.203891  237384 mustload.go:65] Loading cluster: multinode-008767
	I1018 14:53:54.203953  237384 notify.go:220] Checking for updates...
	I1018 14:53:54.204258  237384 config.go:182] Loaded profile config "multinode-008767": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:53:54.204273  237384 status.go:174] checking status of multinode-008767 ...
	I1018 14:53:54.204695  237384 cli_runner.go:164] Run: docker container inspect multinode-008767 --format={{.State.Status}}
	I1018 14:53:54.223316  237384 status.go:371] multinode-008767 host status = "Running" (err=<nil>)
	I1018 14:53:54.223365  237384 host.go:66] Checking if "multinode-008767" exists ...
	I1018 14:53:54.223644  237384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-008767
	I1018 14:53:54.241153  237384 host.go:66] Checking if "multinode-008767" exists ...
	I1018 14:53:54.241389  237384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 14:53:54.241455  237384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-008767
	I1018 14:53:54.258893  237384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/multinode-008767/id_rsa Username:docker}
	I1018 14:53:54.352937  237384 ssh_runner.go:195] Run: systemctl --version
	I1018 14:53:54.359156  237384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 14:53:54.370996  237384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 14:53:54.428315  237384 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-18 14:53:54.418176251 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 14:53:54.428866  237384 kubeconfig.go:125] found "multinode-008767" server: "https://192.168.67.2:8443"
	I1018 14:53:54.428898  237384 api_server.go:166] Checking apiserver status ...
	I1018 14:53:54.428962  237384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 14:53:54.440805  237384 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1228/cgroup
	W1018 14:53:54.449151  237384 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1228/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 14:53:54.449204  237384 ssh_runner.go:195] Run: ls
	I1018 14:53:54.453146  237384 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1018 14:53:54.457360  237384 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1018 14:53:54.457383  237384 status.go:463] multinode-008767 apiserver status = Running (err=<nil>)
	I1018 14:53:54.457393  237384 status.go:176] multinode-008767 status: &{Name:multinode-008767 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 14:53:54.457412  237384 status.go:174] checking status of multinode-008767-m02 ...
	I1018 14:53:54.457730  237384 cli_runner.go:164] Run: docker container inspect multinode-008767-m02 --format={{.State.Status}}
	I1018 14:53:54.475164  237384 status.go:371] multinode-008767-m02 host status = "Running" (err=<nil>)
	I1018 14:53:54.475184  237384 host.go:66] Checking if "multinode-008767-m02" exists ...
	I1018 14:53:54.475414  237384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-008767-m02
	I1018 14:53:54.492249  237384 host.go:66] Checking if "multinode-008767-m02" exists ...
	I1018 14:53:54.492484  237384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 14:53:54.492523  237384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-008767-m02
	I1018 14:53:54.509627  237384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-89690/.minikube/machines/multinode-008767-m02/id_rsa Username:docker}
	I1018 14:53:54.603342  237384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 14:53:54.615938  237384 status.go:176] multinode-008767-m02 status: &{Name:multinode-008767-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1018 14:53:54.615980  237384 status.go:174] checking status of multinode-008767-m03 ...
	I1018 14:53:54.616251  237384 cli_runner.go:164] Run: docker container inspect multinode-008767-m03 --format={{.State.Status}}
	I1018 14:53:54.633949  237384 status.go:371] multinode-008767-m03 host status = "Stopped" (err=<nil>)
	I1018 14:53:54.633978  237384 status.go:384] host is not running, skipping remaining checks
	I1018 14:53:54.633985  237384 status.go:176] multinode-008767-m03 status: &{Name:multinode-008767-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
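
The stderr trace above shows how `status` decides each node's state: `docker container inspect` for the host, `systemctl is-active --quiet service kubelet` over SSH for the kubelet, and an HTTP GET against the apiserver's /healthz (api_server.go:253), which must return 200. A minimal Go sketch of that last probe, assuming only the endpoint URL taken from the log; the skipped TLS verification is an illustration-only shortcut, not what minikube does:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
    )

    // probeHealthz mirrors the check logged at api_server.go:253:
    // GET https://<apiserver>/healthz and require a 200 response.
    func probeHealthz(url string) error {
        client := &http.Client{Transport: &http.Transport{
            // Sketch-only: no cluster CA bundle is wired up here.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probeHealthz("https://192.168.67.2:8443/healthz"); err != nil {
            fmt.Println("apiserver not healthy:", err)
            return
        }
        fmt.Println("apiserver healthy") // the log above saw "returned 200: ok"
    }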

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-008767 node start m03 -v=5 --alsologtostderr: (6.767830036s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.46s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (81.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-008767
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-008767
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-008767: (29.549267415s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-008767 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-008767 --wait=true -v=5 --alsologtostderr: (52.150929038s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-008767
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.81s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-008767 node delete m03: (4.617375313s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.20s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (30.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-008767 stop: (30.123806368s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-008767 status: exit status 7 (86.820339ms)

                                                
                                                
-- stdout --
	multinode-008767
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-008767-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-008767 status --alsologtostderr: exit status 7 (85.502698ms)

                                                
                                                
-- stdout --
	multinode-008767
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-008767-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 14:55:59.367970  247102 out.go:360] Setting OutFile to fd 1 ...
	I1018 14:55:59.368236  247102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:55:59.368244  247102 out.go:374] Setting ErrFile to fd 2...
	I1018 14:55:59.368249  247102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 14:55:59.368457  247102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 14:55:59.368640  247102 out.go:368] Setting JSON to false
	I1018 14:55:59.368667  247102 mustload.go:65] Loading cluster: multinode-008767
	I1018 14:55:59.368791  247102 notify.go:220] Checking for updates...
	I1018 14:55:59.369057  247102 config.go:182] Loaded profile config "multinode-008767": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 14:55:59.369078  247102 status.go:174] checking status of multinode-008767 ...
	I1018 14:55:59.369511  247102 cli_runner.go:164] Run: docker container inspect multinode-008767 --format={{.State.Status}}
	I1018 14:55:59.387972  247102 status.go:371] multinode-008767 host status = "Stopped" (err=<nil>)
	I1018 14:55:59.388016  247102 status.go:384] host is not running, skipping remaining checks
	I1018 14:55:59.388031  247102 status.go:176] multinode-008767 status: &{Name:multinode-008767 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 14:55:59.388103  247102 status.go:174] checking status of multinode-008767-m02 ...
	I1018 14:55:59.388386  247102 cli_runner.go:164] Run: docker container inspect multinode-008767-m02 --format={{.State.Status}}
	I1018 14:55:59.405714  247102 status.go:371] multinode-008767-m02 host status = "Stopped" (err=<nil>)
	I1018 14:55:59.405745  247102 status.go:384] host is not running, skipping remaining checks
	I1018 14:55:59.405752  247102 status.go:176] multinode-008767-m02 status: &{Name:multinode-008767-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.30s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (25.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-008767 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-008767 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (25.223358345s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008767 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (25.81s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-008767
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-008767-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-008767-m02 --driver=docker  --container-runtime=crio: exit status 14 (65.242325ms)

                                                
                                                
-- stdout --
	* [multinode-008767-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-008767-m02' is duplicated with machine name 'multinode-008767-m02' in profile 'multinode-008767'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-008767-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-008767-m03 --driver=docker  --container-runtime=crio: (20.335357112s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-008767
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-008767: exit status 80 (285.192589ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-008767 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-008767-m03 already exists in multinode-008767-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-008767-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-008767-m03: (2.383578745s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.12s)
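
Both failures in this test are the expected ones: exit status 14 (MK_USAGE) when the requested profile name collides with an existing machine name, and exit status 80 (GUEST_NODE_ADD) when `node add` would recreate an existing node. A minimal sketch of the first check, assuming a hypothetical validateProfileName helper and an in-memory name list; minikube's real validation reads its profile store, not a slice:

    package main

    import "fmt"

    // validateProfileName is a hypothetical stand-in for the uniqueness
    // check behind the MK_USAGE error shown above.
    func validateProfileName(name string, machines []string) error {
        for _, m := range machines {
            if m == name {
                return fmt.Errorf("profile name %q is duplicated with machine name %q", name, m)
            }
        }
        return nil
    }

    func main() {
        machines := []string{"multinode-008767", "multinode-008767-m02"}
        if err := validateProfileName("multinode-008767-m02", machines); err != nil {
            fmt.Println("X Exiting due to MK_USAGE:", err) // minikube exits 14 here
        }
    }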

                                                
                                    
TestPreload (96.49s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-177435 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-177435 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (55.358746388s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-177435 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-177435 image pull gcr.io/k8s-minikube/busybox: (2.233961124s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-177435
E1018 14:57:53.420270   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-177435: (5.835731245s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-177435 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1018 14:57:58.355096   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-177435 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (30.426160188s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-177435 image list
helpers_test.go:175: Cleaning up "test-preload-177435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-177435
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-177435: (2.424547269s)
--- PASS: TestPreload (96.49s)

                                                
                                    
TestScheduledStopUnix (97.64s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-186721 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-186721 --memory=3072 --driver=docker  --container-runtime=crio: (21.713492972s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-186721 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-186721 -n scheduled-stop-186721
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-186721 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1018 14:58:51.134934   93187 retry.go:31] will retry after 51.967µs: open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/scheduled-stop-186721/pid: no such file or directory
I1018 14:58:51.136093   93187 retry.go:31] will retry after 176.075µs: open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/scheduled-stop-186721/pid: no such file or directory
I1018 14:58:51.137219   93187 retry.go:31] will retry after 281.713µs: open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/scheduled-stop-186721/pid: no such file or directory
I1018 14:58:51.138356   93187 retry.go:31] will retry after 485.227µs: open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/scheduled-stop-186721/pid: no such file or directory
I1018 14:58:51.139487   93187 retry.go:31] will retry after 377.13µs: open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/scheduled-stop-186721/pid: no such file or directory
I1018 14:58:51.140610   93187 retry.go:31] will retry after 1.081795ms: open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/scheduled-stop-186721/pid: no such file or directory
I1018 14:58:51.142790   93187 retry.go:31] will retry after 1.629579ms: open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/scheduled-stop-186721/pid: no such file or directory
I1018 14:58:51.145011   93187 retry.go:31] will retry after 1.326336ms: open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/scheduled-stop-186721/pid: no such file or directory
I1018 14:58:51.147216   93187 retry.go:31] will retry after 1.594639ms: open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/scheduled-stop-186721/pid: no such file or directory
I1018 14:58:51.149436   93187 retry.go:31] will retry after 3.100643ms: open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/scheduled-stop-186721/pid: no such file or directory
I1018 14:58:51.153627   93187 retry.go:31] will retry after 8.036061ms: open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/scheduled-stop-186721/pid: no such file or directory
I1018 14:58:51.161779   93187 retry.go:31] will retry after 11.352447ms: open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/scheduled-stop-186721/pid: no such file or directory
I1018 14:58:51.173974   93187 retry.go:31] will retry after 11.716452ms: open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/scheduled-stop-186721/pid: no such file or directory
I1018 14:58:51.186227   93187 retry.go:31] will retry after 13.272929ms: open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/scheduled-stop-186721/pid: no such file or directory
I1018 14:58:51.200497   93187 retry.go:31] will retry after 29.17273ms: open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/scheduled-stop-186721/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-186721 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-186721 -n scheduled-stop-186721
E1018 14:59:16.489957   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/functional-823635/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-186721
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-186721 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-186721
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-186721: exit status 7 (71.76049ms)

                                                
                                                
-- stdout --
	scheduled-stop-186721
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-186721 -n scheduled-stop-186721
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-186721 -n scheduled-stop-186721: exit status 7 (69.760484ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-186721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-186721
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-186721: (4.552226436s)
--- PASS: TestScheduledStopUnix (97.64s)
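
The burst of retry.go:31 lines above is a poll loop waiting on the profile's scheduled-stop pid file, with the delay growing between attempts. A minimal sketch of that poll-with-backoff pattern, assuming a hypothetical pid-file path; the jittered delays in the log come from minikube's retry helper, not from this simple doubling loop:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForFile re-reads a path with a roughly doubling delay,
    // in the spirit of the retry.go back-off shown in the log.
    func waitForFile(path string, attempts int) ([]byte, error) {
        delay := 50 * time.Microsecond
        for i := 0; i < attempts; i++ {
            data, err := os.ReadFile(path)
            if err == nil {
                return data, nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
        return nil, fmt.Errorf("%s did not appear after %d attempts", path, attempts)
    }

    func main() {
        // Hypothetical path; the test polls the profile's pid file.
        if _, err := waitForFile("/tmp/scheduled-stop.pid", 15); err != nil {
            fmt.Println(err)
        }
    }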

                                                
                                    
TestInsufficientStorage (9.69s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-157203 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-157203 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.210568306s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c1d38523-3142-4e55-81e1-b88e05050aec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-157203] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a7e4403-b522-439c-b9da-5b975a3c51bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21409"}}
	{"specversion":"1.0","id":"e5f73d54-7701-46f8-9167-81af55418fad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"23700f08-32f8-4b01-9b3f-2139d8810817","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig"}}
	{"specversion":"1.0","id":"fa320a2d-46d6-4a7b-92c8-1071d79a93ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube"}}
	{"specversion":"1.0","id":"4ff5e838-eb54-4bd9-a722-cb5d404d8d43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e654054a-4f17-4d87-9489-e3f60cb9bc49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d5d0ae9c-205d-4558-b7df-5e662ebd8e1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"11ea9a66-3581-4b82-af91-cd2b75948ab0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"11d39db3-5d6f-4c2a-8ae4-8d8a77d4c85e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3836639f-70cb-4fba-b6e8-de8907ff8e18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7e6afbb0-03cf-4173-b29c-f2651d879385","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-157203\" primary control-plane node in \"insufficient-storage-157203\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b0d82af-7c8c-4398-b127-fe61ca81f20f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ed2be72b-0a5a-4161-865c-37510dff69ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8a155367-15e5-4dce-8cd4-c9b837ade9ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-157203 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-157203 --output=json --layout=cluster: exit status 7 (278.187904ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-157203","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-157203","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1018 15:00:14.116710  267205 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-157203" does not appear in /home/jenkins/minikube-integration/21409-89690/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-157203 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-157203 --output=json --layout=cluster: exit status 7 (277.6327ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-157203","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-157203","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1018 15:00:14.395653  267316 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-157203" does not appear in /home/jenkins/minikube-integration/21409-89690/kubeconfig
	E1018 15:00:14.406307  267316 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/insufficient-storage-157203/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-157203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-157203
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-157203: (1.924025279s)
--- PASS: TestInsufficientStorage (9.69s)
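
With --output=json, each progress line in the stdout above is a CloudEvents-style JSON object: `specversion`, `id`, `type` (step/info/error), and a `data` payload. A minimal sketch for consuming such a stream line by line; the field names are taken from the output above, and everything else (struct shape, formatting) is illustrative:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event models only the fields visible in the stdout above.
    type event struct {
        Type string `json:"type"`
        Data struct {
            CurrentStep string `json:"currentstep"`
            Name        string `json:"name"`
            Message     string `json:"message"`
            ExitCode    string `json:"exitcode"`
        } `json:"data"`
    }

    func main() {
        // Pipe `minikube start --output=json ...` into stdin.
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
        for sc.Scan() {
            var e event
            if json.Unmarshal(sc.Bytes(), &e) != nil {
                continue // skip anything that is not a JSON event
            }
            if e.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error (exit %s): %s\n", e.Data.ExitCode, e.Data.Message)
            } else {
                fmt.Printf("[%s] %s: %s\n", e.Data.CurrentStep, e.Data.Name, e.Data.Message)
            }
        }
    }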

                                                
                                    
TestRunningBinaryUpgrade (69.89s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1981152582 start -p running-upgrade-816356 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1981152582 start -p running-upgrade-816356 --memory=3072 --vm-driver=docker  --container-runtime=crio: (43.920072462s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-816356 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-816356 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.025365894s)
helpers_test.go:175: Cleaning up "running-upgrade-816356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-816356
I1018 15:01:23.177195   93187 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2970133016/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 15:01:23.203758   93187 install.go:163] /tmp/TestKVMDriverInstallOrUpdate2970133016/001/docker-machine-driver-kvm2 version is 1.37.0
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-816356: (3.402847234s)
--- PASS: TestRunningBinaryUpgrade (69.89s)

                                                
                                    
TestKubernetesUpgrade (315.9s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.473438685s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-833162
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-833162: (2.435353814s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-833162 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-833162 status --format={{.Host}}: exit status 7 (93.520023ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.953149554s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-833162 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (72.014239ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-833162] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-833162
	    minikube start -p kubernetes-upgrade-833162 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8331622 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-833162 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-833162 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.358357512s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-833162" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-833162
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-833162: (2.454769184s)
--- PASS: TestKubernetesUpgrade (315.90s)
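
Every "(dbg) Non-zero exit ... exit status N" line in this report comes from running the binary and reading the process exit code; the downgrade attempt above, for instance, is asserted to fail with status 106 rather than succeed. A minimal sketch of that pattern with os/exec, reusing the command line from the log; the assertion itself is illustrative rather than the harness's actual code:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "start",
            "-p", "kubernetes-upgrade-833162", "--memory=3072",
            "--kubernetes-version=v1.28.0", "--driver=docker", "--container-runtime=crio")
        out, err := cmd.CombinedOutput()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // The test expects 106 (K8S_DOWNGRADE_UNSUPPORTED) here.
            fmt.Printf("exit status %d\n%s", ee.ExitCode(), out)
            return
        }
        if err != nil {
            fmt.Println("could not run minikube:", err)
            return
        }
        fmt.Println("unexpected success:\n" + string(out))
    }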

                                                
                                    
TestMissingContainerUpgrade (79.78s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2579252581 start -p missing-upgrade-635158 --memory=3072 --driver=docker  --container-runtime=crio
E1018 15:02:58.354122   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2579252581 start -p missing-upgrade-635158 --memory=3072 --driver=docker  --container-runtime=crio: (24.281878543s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-635158
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-635158: (10.434369331s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-635158
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-635158 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-635158 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.051896717s)
helpers_test.go:175: Cleaning up "missing-upgrade-635158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-635158
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-635158: (2.384252146s)
--- PASS: TestMissingContainerUpgrade (79.78s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.53s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (61.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2614136890 start -p stopped-upgrade-843119 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2614136890 start -p stopped-upgrade-843119 --memory=3072 --vm-driver=docker  --container-runtime=crio: (44.271353584s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2614136890 -p stopped-upgrade-843119 stop
E1018 15:01:01.421303   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/addons-493618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2614136890 -p stopped-upgrade-843119 stop: (1.92658653s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-843119 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-843119 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (15.280689158s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (61.48s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.07s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-843119
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-843119: (1.072800499s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.07s)

                                                
                                    
TestPause/serial/Start (69.36s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-552434 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-552434 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m9.362184504s)
--- PASS: TestPause/serial/Start (69.36s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-286873 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-286873 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (77.739323ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-286873] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (23.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-286873 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-286873 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.488678059s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-286873 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (23.85s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-286873 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-286873 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.005650003s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-286873 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-286873 status -o json: exit status 2 (327.939122ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-286873","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-286873
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-286873: (2.049303019s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.38s)

                                                
                                    
TestNetworkPlugins/group/false (3.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-034446 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-034446 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (162.133724ms)

                                                
                                                
-- stdout --
	* [false-034446] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1018 15:01:54.154510  291960 out.go:360] Setting OutFile to fd 1 ...
	I1018 15:01:54.154631  291960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:01:54.154636  291960 out.go:374] Setting ErrFile to fd 2...
	I1018 15:01:54.154641  291960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 15:01:54.154873  291960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-89690/.minikube/bin
	I1018 15:01:54.155366  291960 out.go:368] Setting JSON to false
	I1018 15:01:54.156538  291960 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9865,"bootTime":1760789849,"procs":418,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 15:01:54.156643  291960 start.go:141] virtualization: kvm guest
	I1018 15:01:54.158224  291960 out.go:179] * [false-034446] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 15:01:54.159426  291960 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 15:01:54.159422  291960 notify.go:220] Checking for updates...
	I1018 15:01:54.160982  291960 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 15:01:54.162220  291960 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-89690/kubeconfig
	I1018 15:01:54.163636  291960 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-89690/.minikube
	I1018 15:01:54.165013  291960 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 15:01:54.166157  291960 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 15:01:54.167686  291960 config.go:182] Loaded profile config "NoKubernetes-286873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1018 15:01:54.167845  291960 config.go:182] Loaded profile config "kubernetes-upgrade-833162": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:01:54.167956  291960 config.go:182] Loaded profile config "pause-552434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 15:01:54.168050  291960 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 15:01:54.191948  291960 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 15:01:54.192043  291960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 15:01:54.255891  291960 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 15:01:54.244967618 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 15:01:54.256059  291960 docker.go:318] overlay module found
	I1018 15:01:54.257884  291960 out.go:179] * Using the docker driver based on user configuration
	I1018 15:01:54.259093  291960 start.go:305] selected driver: docker
	I1018 15:01:54.259116  291960 start.go:925] validating driver "docker" against <nil>
	I1018 15:01:54.259131  291960 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 15:01:54.260995  291960 out.go:203] 
	W1018 15:01:54.262063  291960 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1018 15:01:54.263319  291960 out.go:203] 

** /stderr **
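
Note: exit status 14 is the expected outcome here; the test passes because minikube refuses "--cni=false" with the crio runtime, which ships no built-in network plugin. For comparison, a start invocation that satisfies the CNI requirement could look like the following sketch (illustrative only, never run by the suite):

	# any concrete CNI value (bridge, kindnet, calico, ...) clears the MK_USAGE check
	out/minikube-linux-amd64 start -p false-034446 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio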
net_test.go:88: 
----------------------- debugLogs start: false-034446 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-034446

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-034446

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-034446

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-034446

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-034446

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-034446

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-034446

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-034446

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-034446

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-034446

>>> host: /etc/nsswitch.conf:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: /etc/hosts:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: /etc/resolv.conf:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-034446

>>> host: crictl pods:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: crictl containers:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> k8s: describe netcat deployment:
error: context "false-034446" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-034446" does not exist

>>> k8s: netcat logs:
error: context "false-034446" does not exist

>>> k8s: describe coredns deployment:
error: context "false-034446" does not exist

>>> k8s: describe coredns pods:
error: context "false-034446" does not exist

>>> k8s: coredns logs:
error: context "false-034446" does not exist

>>> k8s: describe api server pod(s):
error: context "false-034446" does not exist

>>> k8s: api server logs:
error: context "false-034446" does not exist

>>> host: /etc/cni:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: ip a s:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: ip r s:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: iptables-save:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: iptables table nat:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> k8s: describe kube-proxy daemon set:
error: context "false-034446" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-034446" does not exist

>>> k8s: kube-proxy logs:
error: context "false-034446" does not exist

>>> host: kubelet daemon status:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: kubelet daemon config:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> k8s: kubelet logs:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 15:01:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-286873
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 15:01:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-833162
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 15:01:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-552434
contexts:
- context:
    cluster: NoKubernetes-286873
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 15:01:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-286873
  name: NoKubernetes-286873
- context:
    cluster: kubernetes-upgrade-833162
    user: kubernetes-upgrade-833162
  name: kubernetes-upgrade-833162
- context:
    cluster: pause-552434
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 15:01:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-552434
  name: pause-552434
current-context: pause-552434
kind: Config
users:
- name: NoKubernetes-286873
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/NoKubernetes-286873/client.crt
    client-key: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/NoKubernetes-286873/client.key
- name: kubernetes-upgrade-833162
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kubernetes-upgrade-833162/client.crt
    client-key: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kubernetes-upgrade-833162/client.key
- name: pause-552434
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434/client.crt
    client-key: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-034446

>>> host: docker daemon status:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: docker daemon config:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: /etc/docker/daemon.json:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: docker system info:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: cri-docker daemon status:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: cri-docker daemon config:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: cri-dockerd version:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: containerd daemon status:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: containerd daemon config:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: /etc/containerd/config.toml:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: containerd config dump:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: crio daemon status:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: crio daemon config:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: /etc/crio:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

>>> host: crio config:
* Profile "false-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034446"

----------------------- debugLogs end: false-034446 [took: 3.028175492s] --------------------------------
helpers_test.go:175: Cleaning up "false-034446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-034446
--- PASS: TestNetworkPlugins/group/false (3.35s)
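
Note: the "kubectl config" dump in the debug logs above shows three profiles still live at this point, with pause-552434 as the current context. For orientation, inspecting or switching contexts by hand uses stock kubectl (shown as a sketch, not part of the test):

	kubectl config get-contexts
	kubectl config use-context NoKubernetes-286873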

TestNoKubernetes/serial/Start (8.11s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-286873 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-286873 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.111041417s)
--- PASS: TestNoKubernetes/serial/Start (8.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-286873 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-286873 "sudo systemctl is-active --quiet service kubelet": exit status 1 (317.954403ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
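
Note: "Process exited with status 3" is systemd's documented is-active result for an inactive unit, so the non-zero exit is exactly what the test asserts after a --no-kubernetes start. Without --quiet the same probe also prints the state (illustrative):

	out/minikube-linux-amd64 ssh -p NoKubernetes-286873 "sudo systemctl is-active kubelet"   # prints "inactive", exits 3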

TestNoKubernetes/serial/ProfileList (34.05s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (16.478366117s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (17.571472217s)
--- PASS: TestNoKubernetes/serial/ProfileList (34.05s)
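
Note: both invocations enumerate the same profiles; the JSON form is the scriptable one. A sketch for extracting just the profile names, assuming jq is installed and that the output keeps minikube's usual valid/invalid grouping:

	out/minikube-linux-amd64 profile list --output=json | jq -r '.valid[].Name'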

TestPause/serial/SecondStartNoReconfiguration (5.86s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-552434 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-552434 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.849412621s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.86s)

TestNoKubernetes/serial/Stop (1.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-286873
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-286873: (1.320780449s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

TestNoKubernetes/serial/StartNoArgs (8.95s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-286873 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-286873 --driver=docker  --container-runtime=crio: (8.947690319s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.95s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-286873 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-286873 "sudo systemctl is-active --quiet service kubelet": exit status 1 (313.428352ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

TestStartStop/group/old-k8s-version/serial/FirstStart (49.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-948537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-948537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.055719045s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (49.06s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-948537 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [feac9cfc-147a-4085-b9f8-9cf69c26bba9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [feac9cfc-147a-4085-b9f8-9cf69c26bba9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003501727s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-948537 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.29s)
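
Note: the helper polls for pods labelled integration-test=busybox until they are Running and Ready. A roughly equivalent one-liner with stock kubectl, mirroring the test's 8m budget, would be (sketch only):

	kubectl --context old-k8s-version-948537 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m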

TestStartStop/group/no-preload/serial/FirstStart (51.94s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-165275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-165275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.9349644s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.94s)

TestStartStop/group/old-k8s-version/serial/Stop (17.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-948537 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-948537 --alsologtostderr -v=3: (17.036624713s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (17.04s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-948537 -n old-k8s-version-948537
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-948537 -n old-k8s-version-948537: exit status 7 (67.319324ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-948537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (50.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-948537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-948537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.171578529s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-948537 -n old-k8s-version-948537
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.51s)

TestStartStop/group/no-preload/serial/DeployApp (8.3s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-165275 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [71470317-9d5b-4040-a765-b12127d06e8f] Pending
helpers_test.go:352: "busybox" [71470317-9d5b-4040-a765-b12127d06e8f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [71470317-9d5b-4040-a765-b12127d06e8f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00376581s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-165275 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.30s)

TestStartStop/group/no-preload/serial/Stop (16.47s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-165275 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-165275 --alsologtostderr -v=3: (16.46671071s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.47s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-fsjwd" [73d6354b-baf5-405e-9584-b844619eb7e4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004134449s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-165275 -n no-preload-165275
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-165275 -n no-preload-165275: exit status 7 (69.902362ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-165275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/FirstStart (40.55s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-775590 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-775590 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.548521355s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.55s)

TestStartStop/group/no-preload/serial/SecondStart (51.64s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-165275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-165275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.300234561s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-165275 -n no-preload-165275
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.64s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-fsjwd" [73d6354b-baf5-405e-9584-b844619eb7e4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004771788s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-948537 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-948537 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
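
Note: this check parses "image list --format=json" and reports anything outside minikube's known image set; the kindnetd and busybox images above are expected extras for this cluster. The same inventory can be inspected interactively (illustrative):

	out/minikube-linux-amd64 -p old-k8s-version-948537 image list --format=table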

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-489104 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-489104 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (41.616416696s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.62s)

TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-775590 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5580a092-dcd3-46a3-b64b-aef85291de1b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5580a092-dcd3-46a3-b64b-aef85291de1b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004731125s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-775590 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

TestStartStop/group/newest-cni/serial/FirstStart (26.87s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-741831 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-741831 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (26.869926594s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.87s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4l599" [3b0395d6-d43d-4c1c-8717-77b473ebcc66] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003566936s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/Stop (16.28s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-775590 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-775590 --alsologtostderr -v=3: (16.275468281s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.28s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4l599" [3b0395d6-d43d-4c1c-8717-77b473ebcc66] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004106093s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-165275 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-489104 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2ca10c14-7bb9-43cd-9a37-bb2e16dc4b95] Pending
helpers_test.go:352: "busybox" [2ca10c14-7bb9-43cd-9a37-bb2e16dc4b95] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2ca10c14-7bb9-43cd-9a37-bb2e16dc4b95] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005222767s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-489104 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.30s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-165275 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-775590 -n embed-certs-775590
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-775590 -n embed-certs-775590: exit status 7 (100.681573ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-775590 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/embed-certs/serial/SecondStart (46.73s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-775590 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-775590 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (46.274022243s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-775590 -n embed-certs-775590
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (46.73s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (18.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-489104 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-489104 --alsologtostderr -v=3: (18.122490976s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.12s)
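
Note: under the docker driver each profile is a single node container, so a stop can be cross-checked from the host. A quick sketch (assuming the container name matches the profile name, as it does for this driver):

    out/minikube-linux-amd64 stop -p default-k8s-diff-port-489104
    # the node container should no longer show up as running
    docker ps --filter name=default-k8s-diff-port-489104 --format '{{.Names}} {{.Status}}'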

TestNetworkPlugins/group/auto/Start (42.29s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.287752194s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.29s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (2.89s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-741831 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-741831 --alsologtostderr -v=3: (2.893472822s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.89s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-741831 -n newest-cni-741831
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-741831 -n newest-cni-741831: exit status 7 (77.142389ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-741831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (12.85s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-741831 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-741831 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (12.428489138s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-741831 -n newest-cni-741831
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.85s)
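
Note: with --wait=apiserver,system_pods,default_sa the restart blocks only on those three components, since a cluster without a configured CNI cannot schedule ordinary pods. A hand-rolled spot check of the same trio might look like:

    kubectl --context newest-cni-741831 get --raw='/readyz'                     # apiserver
    kubectl --context newest-cni-741831 -n kube-system get pods                 # system_pods
    kubectl --context newest-cni-741831 -n default get serviceaccount default   # default_sa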

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-489104 -n default-k8s-diff-port-489104
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-489104 -n default-k8s-diff-port-489104: exit status 7 (139.853375ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-489104 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-489104 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-489104 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.740369118s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-489104 -n default-k8s-diff-port-489104
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.09s)
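
Note: this profile restarts with --apiserver-port=8444, so the kubeconfig endpoint should point at the non-default port. A quick confirmation:

    # the control-plane URL printed here should end in :8444
    kubectl --context default-k8s-diff-port-489104 cluster-info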

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-741831 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)
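
Note: the image audit shells out to image list --format=json and flags anything outside the stock minikube image set, such as the kindnet image above. A similar ad-hoc filter (the JSON field names and the allow-list prefixes here are assumptions, not the test's exact logic):

    out/minikube-linux-amd64 -p newest-cni-741831 image list --format=json \
        | jq -r '.[].repoTags[]?' \
        | grep -v -e '^registry.k8s.io/' -e '^gcr.io/k8s-minikube/storage-provisioner'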

TestNetworkPlugins/group/kindnet/Start (42.64s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (42.635587426s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.64s)
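
Note: every Start in this group runs the same command shape with a different --cni value. Sweeping the named presets exercised in this run manually would look roughly like:

    for cni in kindnet calico flannel bridge; do
        out/minikube-linux-amd64 start -p "${cni}-034446" --memory=3072 --wait=true \
            --wait-timeout=15m --cni="$cni" --driver=docker --container-runtime=crio
    done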

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-034446 "pgrep -a kubelet"
I1018 15:07:26.816377   93187 config.go:182] Loaded profile config "auto-034446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-034446 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7b7v7" [87c97258-a2f2-43df-b6b6-1d45668e2ac0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7b7v7" [87c97258-a2f2-43df-b6b6-1d45668e2ac0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004692102s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.20s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vfwtr" [0afed0bf-d5d5-45fd-bebd-29ca136ff9e9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004625397s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vfwtr" [0afed0bf-d5d5-45fd-bebd-29ca136ff9e9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003324368s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-775590 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-034446 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-034446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-034446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
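
Note: Localhost and HairPin differ only in the probe target: the former checks the pod's own loopback, while the latter checks whether the pod can reach itself back through its own Service VIP (hairpin NAT). Side by side:

    kubectl --context auto-034446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # loopback
    kubectl --context auto-034446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # hairpin via the netcat Service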

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-775590 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7nj88" [0ad619a3-d6d3-4935-8997-014a4e21b88c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006000444s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/calico/Start (49.08s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (49.075479699s)
--- PASS: TestNetworkPlugins/group/calico/Start (49.08s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7nj88" [0ad619a3-d6d3-4935-8997-014a4e21b88c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0045542s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-489104 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestNetworkPlugins/group/custom-flannel/Start (48.06s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (48.057842875s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (48.06s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-8bv5s" [a86107b1-1157-4557-8c91-36297f683f57] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00416043s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
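
Note: ControllerPod just waits for the CNI's own pods to go Ready; the label and namespace vary per plugin (k8s-app=calico-node in kube-system for Calico, app=flannel in kube-flannel for Flannel, as seen later in this report). An equivalent wait with plain kubectl:

    kubectl --context kindnet-034446 -n kube-system wait --for=condition=Ready \
        pod -l app=kindnet --timeout=10m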

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-489104 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-034446 "pgrep -a kubelet"
I1018 15:08:02.824356   93187 config.go:182] Loaded profile config "kindnet-034446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-034446 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zskv7" [47cd253b-784f-406e-983a-412cad96a4c1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zskv7" [47cd253b-784f-406e-983a-412cad96a4c1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.009701637s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.25s)

TestNetworkPlugins/group/enable-default-cni/Start (41.86s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (41.855318226s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (41.86s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-034446 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-034446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

TestNetworkPlugins/group/kindnet/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-034446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

TestNetworkPlugins/group/flannel/Start (51.67s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (51.669680999s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.67s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-dhd56" [024194df-afab-46cc-b394-2b6a1e72af04] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-dhd56" [024194df-afab-46cc-b394-2b6a1e72af04] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004566749s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-034446 "pgrep -a kubelet"
I1018 15:08:43.755750   93187 config.go:182] Loaded profile config "calico-034446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-034446 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5cmbc" [e2a6af31-0bf2-4eae-8ea1-e74f1c38c595] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5cmbc" [e2a6af31-0bf2-4eae-8ea1-e74f1c38c595] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003958574s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.21s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-034446 "pgrep -a kubelet"
I1018 15:08:44.288261   93187 config.go:182] Loaded profile config "custom-flannel-034446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-034446 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wnf7h" [e9051ca5-0bf1-4614-8363-1461901f40b8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wnf7h" [e9051ca5-0bf1-4614-8363-1461901f40b8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004466912s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-034446 "pgrep -a kubelet"
I1018 15:08:49.675337   93187 config.go:182] Loaded profile config "enable-default-cni-034446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-034446 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-j9x6j" [a2f8f6ce-9cdf-4c72-a5af-818776fe64ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-j9x6j" [a2f8f6ce-9cdf-4c72-a5af-818776fe64ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003744506s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-034446 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-034446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-034446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-034446 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-034446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.09s)

TestNetworkPlugins/group/calico/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-034446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-034446 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-034446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-034446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/bridge/Start (39.93s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-034446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (39.933342292s)
--- PASS: TestNetworkPlugins/group/bridge/Start (39.93s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-q45w8" [4e6a70b4-952d-4d1f-b8f8-75343e2f6ec8] Running
E1018 15:09:28.254752   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/old-k8s-version-948537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004439643s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-034446 "pgrep -a kubelet"
I1018 15:09:30.882860   93187 config.go:182] Loaded profile config "flannel-034446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (8.17s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-034446 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-r4bwp" [ff4de668-1980-488a-b29d-4949989baa35] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-r4bwp" [ff4de668-1980-488a-b29d-4949989baa35] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004055233s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.17s)

TestNetworkPlugins/group/flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-034446 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-034446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-034446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-034446 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-034446 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ghcrf" [104cba84-e43a-4d5e-b658-01659f433733] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ghcrf" [104cba84-e43a-4d5e-b658-01659f433733] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003222892s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.20s)

TestNetworkPlugins/group/bridge/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-034446 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

TestNetworkPlugins/group/bridge/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-034446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1018 15:10:05.405983   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:10:05.412432   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:10:05.423887   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 15:10:05.445425   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

TestNetworkPlugins/group/bridge/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-034446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1018 15:10:05.487530   93187 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/no-preload-165275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

Test skip (26/326)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-677415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-677415
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)
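
Note: even a skipped group removes the profile it pre-created; the equivalent manual cleanup is just:

    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 delete -p disable-driver-mounts-677415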

TestNetworkPlugins/group/kubenet (3.21s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-034446 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-034446

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-034446

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-034446

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-034446

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-034446

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-034446

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-034446

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-034446

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-034446

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-034446

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

>>> host: /etc/hosts:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

>>> host: /etc/resolv.conf:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-034446

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-034446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-034446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-034446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-034446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-034446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-034446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-034446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-034446" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-034446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-034446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-034446" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt
extensions:
- extension:
last-update: Sat, 18 Oct 2025 15:01:48 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.103.2:8443
name: NoKubernetes-286873
- cluster:
certificate-authority: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt
extensions:
- extension:
last-update: Sat, 18 Oct 2025 15:01:10 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.76.2:8443
name: kubernetes-upgrade-833162
- cluster:
certificate-authority: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt
extensions:
- extension:
last-update: Sat, 18 Oct 2025 15:01:49 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.94.2:8443
name: pause-552434
contexts:
- context:
cluster: NoKubernetes-286873
extensions:
- extension:
last-update: Sat, 18 Oct 2025 15:01:48 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: context_info
namespace: default
user: NoKubernetes-286873
name: NoKubernetes-286873
- context:
cluster: kubernetes-upgrade-833162
user: kubernetes-upgrade-833162
name: kubernetes-upgrade-833162
- context:
cluster: pause-552434
extensions:
- extension:
last-update: Sat, 18 Oct 2025 15:01:49 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: context_info
namespace: default
user: pause-552434
name: pause-552434
current-context: pause-552434
kind: Config
users:
- name: NoKubernetes-286873
user:
client-certificate: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/NoKubernetes-286873/client.crt
client-key: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/NoKubernetes-286873/client.key
- name: kubernetes-upgrade-833162
user:
client-certificate: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kubernetes-upgrade-833162/client.crt
client-key: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kubernetes-upgrade-833162/client.key
- name: pause-552434
user:
client-certificate: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434/client.crt
client-key: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-034446

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034446"

                                                
                                                
----------------------- debugLogs end: kubenet-034446 [took: 3.056892689s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-034446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-034446
--- SKIP: TestNetworkPlugins/group/kubenet (3.21s)

TestNetworkPlugins/group/cilium (3.59s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-034446 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-034446

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-034446

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-034446

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-034446

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-034446

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-034446

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-034446

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-034446

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-034446

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-034446

>>> host: /etc/nsswitch.conf:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: /etc/hosts:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: /etc/resolv.conf:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-034446

>>> host: crictl pods:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: crictl containers:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> k8s: describe netcat deployment:
error: context "cilium-034446" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-034446" does not exist

>>> k8s: netcat logs:
error: context "cilium-034446" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-034446" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-034446" does not exist

>>> k8s: coredns logs:
error: context "cilium-034446" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-034446" does not exist

>>> k8s: api server logs:
error: context "cilium-034446" does not exist

>>> host: /etc/cni:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: ip a s:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: ip r s:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: iptables-save:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: iptables table nat:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-034446

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-034446

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-034446" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-034446" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-034446

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-034446

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-034446" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-034446" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-034446" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-034446" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-034446" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: kubelet daemon config:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> k8s: kubelet logs:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 15:01:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-833162
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-89690/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 15:01:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-552434
contexts:
- context:
    cluster: kubernetes-upgrade-833162
    user: kubernetes-upgrade-833162
  name: kubernetes-upgrade-833162
- context:
    cluster: pause-552434
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 15:01:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-552434
  name: pause-552434
current-context: pause-552434
kind: Config
users:
- name: kubernetes-upgrade-833162
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kubernetes-upgrade-833162/client.crt
    client-key: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/kubernetes-upgrade-833162/client.key
- name: pause-552434
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434/client.crt
    client-key: /home/jenkins/minikube-integration/21409-89690/.minikube/profiles/pause-552434/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-034446

>>> host: docker daemon status:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: docker daemon config:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: docker system info:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: cri-docker daemon status:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: cri-docker daemon config:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: cri-dockerd version:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: containerd daemon status:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: containerd daemon config:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: containerd config dump:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: crio daemon status:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: crio daemon config:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: /etc/crio:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"

>>> host: crio config:
* Profile "cilium-034446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034446"
----------------------- debugLogs end: cilium-034446 [took: 3.432804952s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-034446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-034446
--- SKIP: TestNetworkPlugins/group/cilium (3.59s)